
Compare commits


5 Commits

6 changed files with 882 additions and 791 deletions

View File

@@ -7,7 +7,7 @@ For an edge-weighted graph, a _maximum weight matching_ is a matching that achie
the largest possible sum of weights of matched edges.
The code in this repository is based on a variant of the blossom algorithm that runs in
_O(n m log n)_ steps.
See the file [Algorithm.md](doc/Algorithm.md) for a detailed description.
@@ -44,7 +44,7 @@ The folder [cpp/](cpp/) contains a header-only C++ implementation of maximum wei
**NOTE:**
The C++ code currently implements a slower algorithm that runs in _O(n<sup>3</sup>)_ steps.
I plan to eventually update the C++ code to implement the faster _O(n m log n)_ algorithm.
The C++ code is self-contained and can easily be linked into an application.
It is also reasonably efficient.

View File

@@ -4,19 +4,20 @@
## Introduction
This document describes the implementation of an algorithm that computes a maximum weight matching
in a general graph in time _O(n (n + m) log n)_, where _n_ is the number of vertices in
the graph and _m_ is the number of edges.
In graph theory, a _matching_ is a subset of edges such that none of them share a common vertex.
A _maximum cardinality matching_ is a matching that contains the largest possible number of edges
(or equivalently, the largest possible number of vertices).
If a graph has weights assigned to its edges, a _maximum weight matching_
is a matching that achieves the largest possible sum of weights of matched edges.
An algorithm for maximum weight matching can obviously also be used to calculate a maximum
cardinality matching by simply assigning weight 1 to all edges.
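As an illustration of this reduction, a tiny brute-force matcher can compute both variants. This is a hypothetical helper for toy graphs only (exponential time), not the algorithm implemented in this repository:

```python
from itertools import combinations

def max_weight_matching_bruteforce(edges):
    """Return (matching, total_weight) by exhaustive search.
    `edges` is a list of (x, y, w) tuples.  Exponential time;
    only suitable for tiny example graphs."""
    best, best_weight = [], 0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            vertices = [v for (x, y, _) in subset for v in (x, y)]
            if len(vertices) == len(set(vertices)):  # no shared vertex: a matching
                weight = sum(w for (_, _, w) in subset)
                if weight > best_weight:
                    best, best_weight = list(subset), weight
    return best, best_weight

# Maximum weight matching on the path 0-1-2-3 with weights 2, 3, 2:
# the two outer edges (total weight 4) beat the single heavy middle edge.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 2)]
matching, weight = max_weight_matching_bruteforce(edges)

# The reduction: assigning weight 1 to every edge turns the same routine
# into a maximum cardinality matcher.
unit_edges = [(x, y, 1) for (x, y, _) in edges]
card_matching, cardinality = max_weight_matching_bruteforce(unit_edges)
```

Note how the unit-weight run prefers two edges of weight 1 each over any single edge, which is exactly the cardinality objective.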
Certain related problems can be understood as _restrictions_ of maximum weighted matching
in general graphs.
Examples are: maximum matching in bipartite graphs, maximum cardinality matching in general graphs,
and maximum weighted matching in general graphs with edge weights limited to integers
@@ -48,7 +49,7 @@ It is based on the ideas of Edmonds, but uses different data structures to reduc
of work.
In 1983, Galil, Micali and Gabow published a maximum weighted matching algorithm that runs in
time _O(n m log n)_ [[4]](#galil_micali_gabow1986).
It is an implementation of Edmonds' blossom algorithm that uses advanced data structures
to speed up critical parts of the algorithm.
This algorithm is asymptotically faster than _O(n<sup>3</sup>)_ for sparse graphs,
@@ -63,45 +64,32 @@ of the literature.
The paper describes a maximum weighted matching algorithm that is similar to Edmonds'
blossom algorithm, but carefully implemented to run in time _O(n<sup>3</sup>)_.
It then sketches how advanced data structures can be added to arrive at the Galil-Micali-Gabow
algorithm that runs in time _O(n m log n)_.
In 1990, Gabow published a maximum weighted matching algorithm that runs in time
_O(n m + n<sup>2</sup> log n)_ [[6]](#gabow1990).
It uses several advanced data structures, including Fibonacci heaps.
Unfortunately I don't understand this algorithm at all.
## Choosing an algorithm
I selected the _O(n m log n)_ algorithm by Galil, Micali and Gabow
[[4]](#galil_micali_gabow1986).
This algorithm is asymptotically optimal for sparse graphs.
It has also been shown to be quite fast in practice on several types of graphs
including random graphs [[7]](#mehlhorn_schafer2002).
This algorithm is more difficult to implement than the older _O(n<sup>3</sup>)_ algorithm.
In particular, it requires a specialized data structure to implement concatenable priority queues.
This increases the size and complexity of the code quite a bit.
However, in my opinion the performance improvement is worth the extra effort.
## Description of the algorithm
My implementation roughly follows the description by Zvi Galil in [[5]](#galil1986).
I recommend reading that paper before diving into my description below.
The paper explains the algorithm in depth and shows how it relates to matching
in bipartite graphs and non-weighted graphs.
@@ -253,14 +241,14 @@ have _slack_.
An augmenting path that consists only of tight edges is _guaranteed_ to increase the weight
of the matching as much as possible.
While searching for an augmenting path, we restrict the search to tight edges,
ignoring all edges that have slack.
Certain explicit actions of the algorithm cause edges to become tight or slack.
How this works will be explained later.
To find an augmenting path, the algorithm searches for alternating paths that start
in an unmatched vertex.
The collection of such alternating paths forms a forest of trees.
Each tree is rooted in an unmatched vertex, and all paths from the root to the leaves of a tree
are alternating paths.
The nodes in these trees are top-level blossoms.
@@ -289,8 +277,8 @@ an odd-length alternating cycle.
The lowest common ancestor node in the alternating tree forms the beginning and end
of the alternating cycle.
In this case a new blossom must be created by shrinking the cycle.
On the other hand, if the two S-blossoms are in different alternating trees,
the edge that links the blossoms is part of an augmenting path between the roots of the two trees.
![Figure 3](figures/graph3.png) <br>
*Figure 3: Growing alternating trees*
@@ -314,7 +302,7 @@ The search procedure considers these vertices one-by-one and tries to use them t
either grow the alternating tree (thus adding new vertices to the queue),
or discover an augmenting path or a new blossom.
The search for an augmenting path proceeds as follows:
- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
@@ -469,9 +457,9 @@ $$ \pi_{x,y} = u_x + u_y + \sum_{(x,y) \in B} z_B - w_{x,y} $$
An edge is _tight_ if and only if its slack is zero.
Given the values of the dual variables, it is very easy to calculate the slack of an edge
which is not contained in any blossom: add the duals of its incident vertices and
subtract the weight.
To check whether an edge is tight, simply compute its slack and compare it to zero.
Calculating the slack of an edge that is contained in one or more blossoms is a little tricky,
but fortunately we don't need such calculations.
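For an edge outside any blossom, this computation can be sketched as follows (the dual values here are made up for illustration):

```python
def edge_slack(u, edge):
    """Slack of an edge (x, y, w) that is not contained in any blossom:
    the sum of its endpoint duals minus its weight."""
    x, y, w = edge
    return u[x] + u[y] - w

# Hypothetical vertex duals for a 3-vertex example.
u = {0: 4, 1: 3, 2: 5}
tight = edge_slack(u, (0, 1, 7))   # 4 + 3 - 7 = 0, so edge (0,1) is tight
slack = edge_slack(u, (1, 2, 6))   # 3 + 5 - 6 = 2, so edge (1,2) has slack
```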
@@ -504,7 +492,7 @@ At that point the maximum weight matching has been found.
When the matching algorithm is finished, the constraints can be checked to verify
that the matching is optimal.
This check is simpler and faster than the matching algorithm itself.
It can therefore be a useful way to guard against bugs in the algorithm.
### Rules for updating dual variables ### Rules for updating dual variables
@@ -534,11 +522,12 @@ It then changes dual variables as follows:
- _z<sub>B</sub> &larr; z<sub>B</sub> &#x2212; 2 * &delta;_ for every non-trivial T-blossom _B_
Dual variables of unlabeled blossoms and their vertices remain unchanged.
Dual variables _z<sub>B</sub>_ of non-trivial sub-blossoms also remain unchanged;
only top-level blossoms have their _z<sub>B</sub>_ updated.
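A minimal sketch of one such update, assuming the usual blossom-algorithm conventions for the rules not repeated in this excerpt (S-vertex duals decrease by _&delta;_, T-vertex duals increase by _&delta;_, S-blossom duals increase by _2&delta;_; only the T-blossom rule is quoted above):

```python
def apply_delta(vertex_dual, vertex_label, blossom_dual, blossom_label, delta):
    """Apply one dual variable update.  Unlabeled vertices and blossoms,
    and non-top-level blossoms, are simply absent from the label maps."""
    u = dict(vertex_dual)
    z = dict(blossom_dual)
    for x, lbl in vertex_label.items():
        if lbl == "S":
            u[x] -= delta
        elif lbl == "T":
            u[x] += delta
    for b, lbl in blossom_label.items():
        if lbl == "S":
            z[b] += 2 * delta
        elif lbl == "T":
            z[b] -= 2 * delta
    return u, z

# A tight matched edge between an S-vertex and a T-vertex stays tight:
# one endpoint loses delta, the other gains delta, so the slack is unchanged.
u, z = apply_delta({0: 4, 1: 3}, {0: "S", 1: "T"}, {}, {}, delta=2)
```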
Note that these rules ensure that no change occurs to the slack of any edge which is matched,
or part of an alternating tree, or contained in a blossom.
Such edges are tight and remain tight through the update.
However, the update reduces the slack of edges between S-blossoms and edges between S-blossoms
and unlabeled blossoms.
It may cause some of these edges to become tight, allowing them to be used
@@ -569,25 +558,62 @@ If the dual update is limited by _&delta;<sub>4</sub>_, it causes the dual varia
a T-blossom to become zero.
The next step is to expand that blossom.
### Discovering tight edges through delta steps
A delta step may find that _&delta; = 0_, implying that the dual variables don't change.
This can still be useful since all types of updates have side effects (adding an edge
to an alternating tree, or expanding a blossom) that allow the algorithm to make progress.
In fact, it is convenient to let the dual update mechanism drive the entire process of discovering
tight edges and growing alternating trees.
In my description of the search algorithm above, I stated that upon discovery of a tight edge
between a newly labeled S-vertex and an unlabeled vertex or a different S-blossom, the edge should
be used to grow the alternating tree or to create a new blossom or to form an augmenting path.
However, it turns out to be easier to postpone the use of such edges until the next delta step.
While scanning newly labeled S-vertices, edges to unlabeled vertices or different S-blossoms
are discovered but not yet used.
Such edges are merely registered in a suitable data structure.
Even if the edge is tight, it is registered rather than used right away.
Once the scan completes, a delta step will be done.
If any tight edges were discovered during the scan, the delta step will find that either
_&delta;<sub>2</sub> = 0_ or _&delta;<sub>3</sub> = 0_.
The corresponding step (growing the alternating tree, creating a blossom or augmenting
the matching) will occur at that point.
If no suitable tight edges exist, a real (non-zero) change of dual variables will occur.
The search for an augmenting path becomes as follows:
- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
- Assign label S to all top-level blossoms that contain an unmatched vertex.
Add all vertices inside such blossoms to _Q_.
- Repeat until either an augmenting path is found or _&delta;<sub>1</sub> = 0_:
  - Scan all vertices in _Q_ as described earlier.
    Register edges to unlabeled vertices or other S-blossoms.
    Do not yet use such edges to change the alternating tree, even if the edge is tight.
  - Calculate _&delta;_ and update dual variables as described above.
  - If _&delta; = &delta;<sub>1</sub>_, end the search.
    The maximum weight matching has been found.
  - If _&delta; = &delta;<sub>2</sub>_, use the corresponding edge to grow the alternating tree.
    Assign label T to the unlabeled blossom.
    Then assign label S to its mate and add the new S-vertices to _Q_.
  - If _&delta; = &delta;<sub>3</sub>_ and the corresponding edge connects two S-blossoms
    in the same alternating tree, use the edge to create a new blossom.
    Add the new S-vertices to _Q_.
  - If _&delta; = &delta;<sub>3</sub>_ and the corresponding edge connects two S-blossoms
    in different alternating trees, use the edge to construct an augmenting path.
    End the search and return the augmenting path.
  - If _&delta; = &delta;<sub>4</sub>_, expand the corresponding T-blossom.
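The dispatch on the minimum _&delta;_ candidate can be sketched as follows (`None` marks a case with no candidate, e.g. no registered edges; the names are illustrative only):

```python
def choose_delta(delta1, delta2, delta3, delta4):
    """Pick the smallest applicable delta candidate.
    Returns (value, case); ties resolve to the lowest-numbered case."""
    candidates = [(v, k) for k, v in
                  (("delta1", delta1), ("delta2", delta2),
                   ("delta3", delta3), ("delta4", delta4))
                  if v is not None]
    return min(candidates)

# A tight edge to an unlabeled vertex was registered, so delta2 = 0:
# the zero-delta step still makes progress by growing the tree.
value, case = choose_delta(5, 0, 3, None)
```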
It may seem complicated, but this approach is actually simpler.
The code that scans newly labeled S-vertices no longer needs special treatment of tight edges.
In general, multiple updates of the dual variables are necessary during a single _stage_ of
the algorithm.
Remember that a stage is the process of growing alternating trees until an augmenting path is found.
A stage ends either by augmenting the matching, or by concluding that no further improvement A stage ends either by augmenting the matching, or by concluding that no further improvement
is possible. is possible.
Each stage consists of one or more substages.
A substage scans tight edges to grow the alternating trees.
When a substage runs out of tight edges, it ends by performing a dual variable update.
A substage also ends immediately when it finds an augmenting path.
At the end of each stage, labels and alternating trees are removed.
The matching algorithm ends when a substage ends in a dual variable update limited
by _&delta;<sub>1</sub>_.
At that point the matching has maximum weight.
### Expanding a blossom ### Expanding a blossom
@@ -635,161 +661,322 @@ All vertices of sub-blossoms that got label S are inserted into _Q_.
![Figure 5](figures/graph5.png) <br>
*Figure 5: Expanding a T-blossom*
### Keeping track of the top-level blossom of each vertex
The algorithm often needs to find the top-level blossom _B(x)_ that contains a given vertex _x_.
A naive implementation may keep this information in an array where the element with
index _x_ holds a pointer to blossom _B(x)_.
Lookup in this array would be fast, but keeping the array up-to-date takes too much time.
There can be _O(n)_ stages, and _O(n)_ blossoms can be created or expanded during a stage,
and a blossom can contain _O(n)_ vertices,
therefore the total number of updates to the array could add up to _O(n<sup>3</sup>)_.
To solve this, we use a special data structure: a _concatenable priority queue_.
Each top-level blossom maintains an instance of this type of queue, containing its vertices.
Each vertex is a member in precisely one such queue.
To find the top-level blossom _B(x)_ that contains a given vertex _x_, we determine
the queue instance in which the vertex is a member.
This takes time _O(log n)_.
The queue instance corresponds directly to a specific blossom, which we can find
for example by storing a pointer to the blossom inside the queue instance.
When a new blossom is created, the concatenable queues of its sub-blossoms are merged
to form one concatenable queue for the new blossom.
The merged queue contains all vertices of the original queues.
Merging a pair of queues takes time _O(log n)_.
To merge the queues of _k_ sub-blossoms, the concatenation step is repeated _k-1_ times,
taking total time _O(k log n)_.
When a blossom is expanded, its concatenable queue is un-concatenated to recover separate queues
for the sub-blossoms.
This also takes time _O(log n)_ for each sub-blossom.
Implementation details of concatenable queues are discussed later in this document.
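The interface can be sketched with a deliberately naive stand-in. Note that this toy updates a per-vertex pointer table eagerly on merge, which is exactly the _O(n)_-per-blossom cost that a real balanced-tree implementation avoids; it only demonstrates the find-queue/merge operations, not their _O(log n)_ bounds:

```python
QUEUE_OF = {}   # vertex -> queue; stands in for the real find-queue operation

class ToyConcatenableQueue:
    """Toy stand-in for a concatenable priority queue (illustration only).
    A real implementation uses balanced trees so that find-queue, merge
    and split all run in O(log n)."""

    def __init__(self, owner, vertices):
        self.owner = owner            # e.g. a pointer to the blossom
        self.vertices = set(vertices)
        for x in self.vertices:
            QUEUE_OF[x] = self

    def merge(self, other):
        """Absorb another queue, as when sub-blossoms form a new blossom."""
        for x in other.vertices:
            QUEUE_OF[x] = self
        self.vertices |= other.vertices

# Two sub-blossoms merge into one: B(3) changes without touching vertex 3's data.
qa = ToyConcatenableQueue("blossom_A", [0, 1, 2])
qb = ToyConcatenableQueue("blossom_B", [3, 4])
qa.merge(qb)
```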
### Lazy updating of dual variables
During a delta step, the dual variables of labeled vertices and blossoms change as described above.
Updating these variables directly would take time _O(n)_ per delta step.
The total number of delta steps during a matching may be _&Theta;(n<sup>2</sup>)_,
pushing the total time to update dual variables to _O(n<sup>3</sup>)_ which is too slow.
To solve this, [[4]](#galil_micali_gabow1986) describes a technique that stores dual values
in a _modified_ form which is invariant under delta steps.
The modified values can be converted back to the true dual values when necessary.
[[7]](#mehlhorn_schafer2002) describes a slightly different technique which I find easier
to understand.
My implementation is very similar to theirs.
The first trick is to keep track of the running sum of _&delta;_ values since the beginning of the algorithm.
Let's call that number _&Delta;_.
At the start of the algorithm _&Delta; = 0_, but the value increases as the algorithm goes through delta steps.
For each non-trivial blossom, rather than storing its true dual value, we store a _modified_ dual value:
- For an S-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> - 2 &Delta;_
- For a T-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> + 2 &Delta;_
- For an unlabeled blossom or non-top-level blossom, the modified dual value is equal
to the true dual value.
These modified values are invariant under delta steps.
Thus, there is no need to update the stored values during a delta step.
Since the modified blossom dual value depends on the label (S or T) of the blossom,
the modified value must be updated whenever the label of the blossom changes.
This update can be done in constant time, and changing the label of a blossom is
in any case an explicit step, so this won't increase the asymptotic run time.
For each vertex, rather than storing its true dual value, we store a _modified_ dual value:
- For an S-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> + &Delta;_
- For a T-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub> - &Delta;_
- For an unlabeled vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub>_
where _offset<sub>B</sub>_ is an extra variable which is maintained for each top-level blossom.
Again, the modified values are invariant under delta steps, which implies that no update
to the stored values is necessary during a delta step.
Since the modified vertex dual value depends on the label (S or T) of its top-level blossom,
an update is necessary when that label changes.
For S-vertices, we can afford to apply that update directly to the vertices involved.
This is possible since a vertex becomes an S-vertex at most once per stage.
The situation is more complicated for T-vertices.
During a stage, a T-vertex can become unlabeled if it is part of a T-blossom that gets expanded.
The same vertex can again become a T-vertex, then again become unlabeled during a subsequent
blossom expansion.
In this way, a vertex can transition between T-vertex and unlabeled vertex up to _O(n)_ times
within a stage.
We cannot afford to update the stored modified vertex dual so many times.
This is where the _offset_ variables come in.
If a blossom becomes a T-blossom, rather than updating the modified duals of all vertices,
we update only the _offset_ variable of the blossom such that the modified vertex duals
remain unchanged.
If a blossom is expanded, we push the _offset_ values down to its sub-blossoms.
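Recovering true vertex duals from the stored modified values can be sketched directly from the three cases above (the numeric values are made up for illustration):

```python
def true_vertex_dual(u_mod, label, offset, Delta):
    """Invert the modified-dual bookkeeping described above.
    `offset` is the offset variable of the vertex's top-level blossom."""
    if label == "S":                    # stored: u' = u + Delta
        return u_mod - Delta
    if label == "T":                    # stored: u' = u - offset - Delta
        return u_mod + offset + Delta
    return u_mod + offset               # unlabeled: u' = u - offset

# An S-vertex with true dual 9 at Delta = 0 stores u' = 9.
u_mod, Delta = 9, 0
# Two delta steps of sizes 2 and 3: Delta grows, the stored value does not.
Delta += 2 + 3
recovered = true_vertex_dual(u_mod, "S", 0, Delta)   # true dual fell by 5
```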
### Efficiently computing _&delta;_
To perform a delta step, the algorithm computes the values
of _&delta;<sub>1</sub>_, _&delta;<sub>2</sub>_, _&delta;<sub>3</sub>_ and _&delta;<sub>4</sub>_
and determines which edge (_&delta;<sub>2</sub>_, _&delta;<sub>3</sub>_) or
blossom (_&delta;<sub>4</sub>_) achieves the minimum value.
A naive implementation might compute _&delta;_ by looping over the vertices, blossoms and edges
in the graph.
The total number of delta steps during a matching may be _&Theta;(n<sup>2</sup>)_,
pushing the total time for _&delta;_ calculations to _O(n<sup>2</sup> m)_ which is much too slow.
[[4]](#galil_micali_gabow1986) introduces a combination of data structures from which
the value of _&delta;_ can be computed efficiently.
_&delta;<sub>1</sub>_ is the minimum dual value of any S-vertex.
This value can be computed in constant time.
The dual value of an unmatched vertex is reduced by _&delta;_ during every delta step.
Since all vertex duals start with the same dual value _u<sub>start</sub>_,
unmatched vertices have dual value _&delta;<sub>1</sub> = u<sub>start</sub> - &Delta;_.
_&delta;<sub>3</sub>_ is half of the minimum slack of any edge between two different S-blossoms.
To compute this efficiently, we keep edges between S-blossoms in a priority queue.
The edges are inserted into the queue during scanning of newly labeled S-vertices.
To compute _&delta;<sub>3</sub>_, we simply find the minimum-priority element of the queue.
A complication may occur when a new blossom is created.
Edges that connect different top-level S-blossoms before creation of the new blossom
may end up as internal edges inside the newly created blossom.
This implies that such edges would have to be removed from the _&delta;<sub>3</sub>_ priority queue,
but that would be quite difficult.
Instead, we just let those edges stay in the queue.
When computing the value of _&delta;<sub>3</sub>_, we thus have to check whether the minimum
element represents an edge between different top-level blossoms.
If not, we discard such stale elements until we find an edge that does.
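This lazy-deletion scheme can be sketched with a binary heap. Here `top_blossom` is a hypothetical stand-in for the _B(x)_ lookup, and the priorities are arbitrary modified-slack values:

```python
import heapq

# Entries are (modified_slack, x, y); an entry is stale when both endpoints
# now lie in the same top-level blossom.
delta3_queue = []
heapq.heappush(delta3_queue, (1, 0, 1))
heapq.heappush(delta3_queue, (4, 2, 3))

# After a blossom merge, vertices 0 and 1 share a top-level blossom,
# so edge (0, 1) became internal and its queue entry is stale.
top_blossom = {0: "A", 1: "A", 2: "B", 3: "C"}

def min_delta3_edge(queue):
    """Return the minimum-priority edge between different top-level blossoms,
    discarding stale entries from the top of the heap along the way."""
    while queue and top_blossom[queue[0][1]] == top_blossom[queue[0][2]]:
        heapq.heappop(queue)
    return queue[0] if queue else None
```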
A complication occurs when dual variables are updated.
At that point, the slack of any edge between different S-blossoms decreases by _2\*&delta;_.
But we cannot afford to update the priorities of all elements in the queue.
To solve this, we set the priority of each edge to its _modified slack_.
The _modified slack_ of an edge is defined as follows:
$$ \pi'_{x,y} = u'_x + u'_y - w_{x,y} $$
The modified slack is computed in the same way as true slack, except it uses
the modified vertex duals instead of true vertex duals.
Blossom duals are ignored since we will never compute the modified slack of an edge that
is contained inside a blossom.
Because modified vertex duals are invariant under delta steps, so is the modified edge slack.
As a result, the priorities of edges in the priority queue remain unchanged during a delta step.
_&delta;<sub>4</sub>_ is half of the minimum dual variable of any T-blossom.
To compute this efficiently, we keep non-trivial T-blossoms in a priority queue.
The blossoms are inserted into the queue when they become a T-blossom and removed from
the queue when they stop being a T-blossom.
A complication occurs when dual variables are updated.
At that point, the dual variable of any T-blossom decreases by _2\*&delta;_.
But we can not afford to update the priorities of all elements in the queue.
To solve this, we set the priority of each blossom to its _modified_ dual value
_z'<sub>B</sub> = z<sub>B</sub> + 2\*&Delta;_.
These values are invariant under delta steps.
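A minimal sketch of this bookkeeping (the class and method names are invented for illustration, and lazy deletion via an `active` set stands in for the real priority queue's delete operation):

```python
import heapq

class Delta4Tracker:
    """Tracks delta4 = min(z_B)/2 over T-blossoms via lazy dual updates."""

    def __init__(self):
        self.delta_sum = 0     # running sum of delta steps (Delta)
        self.heap = []         # entries (modified dual z', blossom id)
        self.active = set()    # ids of blossoms that are currently T-blossoms

    def add_t_blossom(self, blossom_id, true_dual):
        # Store the modified dual z' = z + 2*Delta; it never changes afterwards.
        heapq.heappush(self.heap, (true_dual + 2 * self.delta_sum, blossom_id))
        self.active.add(blossom_id)

    def remove_t_blossom(self, blossom_id):
        # Lazy removal: stale heap entries are skipped inside delta4().
        self.active.discard(blossom_id)

    def apply_delta_step(self, delta):
        # All T-blossom duals implicitly decrease by 2*delta;
        # only the single variable Delta is updated.
        self.delta_sum += delta

    def delta4(self):
        while self.heap and self.heap[0][1] not in self.active:
            heapq.heappop(self.heap)
        if not self.heap:
            return None
        z_modified = self.heap[0][0]
        return (z_modified - 2 * self.delta_sum) / 2   # half the true dual
```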
_&delta;<sub>2</sub>_ is the minimum slack of any edge between an S-vertex and unlabeled vertex.
To compute this efficiently, we use a fairly complicated strategy that involves
three levels of priority queues.
At the lowest level, every T-vertex or unlabeled vertex maintains a separate priority queue
of edges between itself and any S-vertex.
Edges are inserted into this queue during scanning of newly labeled S-vertices.
Note that S-vertices do not maintain this type of queue.
The priorities of edges in these queues are set to their _modified slack_.
This ensures that the priorities remain unchanged during delta steps.
The priorities also remain unchanged when the T-vertex becomes unlabeled or the unlabeled
vertex becomes a T-vertex.
At the middle level, every T-blossom or unlabeled top-level blossom maintains a priority queue
containing its vertices.
This is in fact the _concatenable priority queue_ that is maintained by every top-level blossom
as was described earlier in this document.
The priority of each vertex in the mid-level queue is set to the minimum priority of any edge
in the low-level queue of that vertex.
If edges are added to (or removed from) the low-level queue, the priority of the corresponding
vertex in the mid-level queue may change.
If the low-level queue of a vertex is empty, that vertex has priority _Infinity_
in the mid-level queue.
At the highest level, unlabeled top-level blossoms are tracked in one global priority queue.
The priority of each blossom in this queue is set to the minimum slack of any edge
from that blossom to an S-vertex plus _&Delta;_.
These priorities are invariant under delta steps.
To compute _&delta;<sub>2</sub>_, we find the minimum priority in the high-level queue
and adjust it by _&Delta;_.
To find the edge associated with _&delta;<sub>2</sub>_,
we use the high-level queue to find the unlabeled blossom with minimum priority,
then use that blossom's mid-level queue to find the vertex with minimum priority,
then use that vertex's low-level queue to find the edge with minimum priority.
The whole thing is a bit tricky, but it works.
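The three-level descent can be illustrated with a highly simplified sketch (the function name and the dict-of-dicts representation are invented; plain `min()` over dicts stands in for the real priority queues, and a single invariant priority convention, slack plus _&Delta;_, is used at every level for brevity):

```python
def find_delta2_edge(blossom_queues, delta_sum):
    # blossom_queues maps each unlabeled top-level blossom to its mid-level
    # queue, which maps each vertex to its low-level queue of edges.
    # Edge priorities are invariant under delta steps (slack + Delta).
    def blossom_prio(b):
        return min((min(q.values()) for q in blossom_queues[b].values() if q),
                   default=float("inf"))

    # High level: unlabeled blossom with the overall least-slack edge.
    best_blossom = min(blossom_queues, key=blossom_prio)
    vertex_queues = blossom_queues[best_blossom]
    # Middle level: the vertex of that blossom with minimum priority.
    best_vertex = min(vertex_queues,
                      key=lambda v: min(vertex_queues[v].values(),
                                        default=float("inf")))
    # Low level: the minimum-priority edge of that vertex.
    edge_queue = vertex_queues[best_vertex]
    best_edge = min(edge_queue, key=edge_queue.get)
    # Undo the Delta adjustment to recover the true slack.
    return edge_queue[best_edge] - delta_sum, best_edge
```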
### Re-using alternating trees
According to [[5]](#galil1986), labels and alternating trees should be erased at the end of each stage.
However, the algorithm can be optimized by keeping some of the labels and re-using them
in the next stage.
The optimized algorithm erases _only_ the two alternating trees that are part of
the augmenting path.
All blossoms in those two trees lose their labels and become free blossoms again.
Other alternating trees, which are not involved in the augmenting path, are preserved
into the next stage, and so are the labels on the blossoms in those trees.
This optimization is well known and is described for example in [[7]](#mehlhorn_schafer2002).
It does not affect the worst-case asymptotic run time of the algorithm,
but it provides a significant practical speed up for many types of graphs.
Erasing alternating trees is easy enough, but selectively stripping labels off blossoms
has a few implications.
The blossoms that lose their labels need to have their modified dual values updated.
The T-blossoms additionally need to have their _offset<sub>B</sub>_ variables updated
to keep the vertex dual values consistent.
For S-blossoms that lose their labels, the modified vertex dual variables are updated directly.
The various priority queues also need updating.
Former T-blossoms must be removed from the priority queue for _&delta;<sub>4</sub>_.
Edges incident on former S-vertices must be removed from the priority queues for _&delta;<sub>3</sub>_ and _&delta;<sub>2</sub>_.
Finally, S-vertices that become unlabeled need to construct a proper priority queue
of incident edges to other S-vertices for _&delta;<sub>2</sub>_ tracking.
This involves visiting every incident edge of every vertex in each S-blossom that loses its label.
## Run time of the algorithm

Every stage of the algorithm either increases the number of matched vertices by 2 or
ends the matching.
Therefore the number of stages is at most _n/2_.
Every stage runs in time _O((n + m) log n)_, therefore the complete algorithm runs in
time _O(n (n + m) log n)_.
Creating a blossom reduces the number of top-level blossoms by at least 2,
thus limiting the number of simultaneously existing blossoms to _O(n)_.
Blossoms that are created during a stage become S-blossoms and survive until the end of that stage,
therefore _O(n)_ blossoms are created during a stage.
Creating a blossom with _k_ sub-blossoms reduces the number of top-level blossoms by _k-1_,
thus limiting the total number of sub-blossoms that can be involved in blossom creation
during a stage to _O(n)_.

Creating a blossom involves tracing the alternating path to the closest common ancestor,
which takes time _O(k log n)_ for a blossom with _k_ sub-blossoms.
It also involves bookkeeping per sub-blossom, which takes time _O(log n)_ per sub-blossom.
It also involves relabeling former T-vertices as S-vertices, but I account for that
time separately below so I can ignore it here.
It also involves merging the concatenable queues which track the vertices in top-level blossoms.
Merging two queues takes time _O(log n)_, therefore merging the queues of all sub-blossoms
takes time _O(k log n)_.
Creating a blossom thus takes time _O(k log n)_.
Blossom creation thus takes total time _O(n log n)_ per stage.
During each stage, a blossom becomes an S-blossom or T-blossom at most once.
A blossom also becomes unlabeled at most once, at the end of the stage.
Changing the label of a blossom takes some simple bookkeeping, as well as operations
on priority queues (_&delta;<sub>4</sub>_ for T-blossoms, _&delta;<sub>2</sub>_ for unlabeled
blossoms) which take time _O(log n)_ per blossom.
Assigning label S or removing label S also involves work for the vertices in the blossom
and their edges, but I account for that time separately below so I can ignore it here.
Blossom labeling thus takes total time _O(n log n)_ per stage.
During each stage, a vertex becomes an S-vertex at most once, and an S-vertex becomes
unlabeled at most once.
In both cases, the incident edges of the affected vertex are scanned and potentially
added to or removed from priority queues.
This involves finding the top-level blossoms of the endpoints of each edge, which
takes time _O(log n)_ per edge.
The updates to priority queues also take time _O(log n)_ per edge.
Edge scanning thus takes total time _O(m log n)_ per stage.
Note that _m &le; n<sup>2</sup>_ therefore _log m &le; 2 log n_.
This implies that an operation on a priority queue with _m_ elements takes time _O(log n)_.
Expanding a blossom involves some bookkeeping which takes time _O(log n)_ per sub-blossom.
It also involves splitting the concatenable queue that tracks the vertices in top-level blossoms,
which takes time _O(log n)_ per sub-blossom.
In case of a T-blossom, it also involves reconstructing the alternating path through
the blossom which takes time _O(k log n)_ for _k_ sub-blossoms.
Also in case of a T-blossom, some sub-blossoms will become S-blossoms and their
vertices become S-vertices, but I have already accounted for that cost above
so I can ignore it here.
Expanding a blossom thus takes time _O(k log n)_.
Any blossom is involved as a sub-blossom in an expanding blossom at most once per stage.
Blossom expansion thus takes total time _O(n log n)_ per stage.
The length of an augmenting path is _O(n)_.
Tracing the augmenting path and augmenting the matching along the path can be done
in _O(n log n)_ steps.
Augmenting through a blossom takes a number of steps that is proportional to the number of
its sub-blossoms.
Since there are fewer than _n_ non-trivial blossoms, the total cost of augmenting through
blossoms is _O(n)_ steps.
Augmenting thus takes total time _O(n log n)_ per stage.

A delta step limited by _&delta;<sub>1</sub>_ ends the algorithm and therefore happens at most once.
A _&delta;<sub>2</sub>_ step assigns a label to a previously unlabeled blossom and therefore
happens _O(n)_ times per stage.
A _&delta;<sub>3</sub>_ step either creates a blossom or finds an augmenting path and therefore
happens _O(n)_ times per stage.
A _&delta;<sub>4</sub>_ step expands a blossom and therefore happens _O(n)_ times per stage.
Therefore the number of delta steps is _O(n)_ per stage.
Calculating _&delta;<sub>1</sub>_ takes constant time.
Calculating _&delta;<sub>2</sub>_ and _&delta;<sub>4</sub>_ requires a constant number
of lookups in priority queues which takes time _O(log n)_ per delta step.
During calculation of _&delta;<sub>3</sub>_, it may be necessary to remove stale edges
from the priority queue.
Since every edge is inserted into the priority queue at most once per stage,
at most _O(m)_ edges are removed per stage, which takes total time _O(m log n)_ per stage.
Calculation of _&delta;_ thus takes total time _O((n + m) log n)_ per stage.
Applying updates to dual variables is done in a lazy fashion as discussed above.
The only variable that is updated directly is _&Delta;_, which takes time _O(1)_ per delta step.
Updating dual variables thus takes total time _O(n)_ per stage.
## Implementation details

Every edge therefore appears in two adjacency lists.

These data structures are initialized at the start of the matching algorithm
and never change while the algorithm runs.
#### Priority queue
Priority queues are used for a number of purposes:
- a priority queue to find the least-slack edge between S-blossoms;
- a priority queue to find the minimum-dual T-blossom;
- a priority queue to find the unlabeled blossom with least-slack edge to an S-blossom;
- a separate priority queue per vertex to find the least-slack edge between that vertex
and any S-vertex.
These queues are implemented as binary heaps.
This type of queue supports the following operations:
- _insert_ a new element with specified priority in time _O(log n)_;
- find the element with _minimum_ priority in time _O(1)_;
- _delete_ a specified element in time _O(log n)_;
- _change_ the priority of a specified element in time _O(log n)_.
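A compact sketch of such an indexed binary heap (illustrative only; the repository's `PriorityQueue` class is a separate implementation). A `pos` map from key to array index is what makes _delete_ and _change_ run in _O(log n)_:

```python
class BinaryHeap:
    # Minimal indexed binary min-heap supporting the four operations above.

    def __init__(self):
        self.data = []   # list of (priority, key)
        self.pos = {}    # key -> index of its entry in self.data

    def _swap(self, i, j):
        self.data[i], self.data[j] = self.data[j], self.data[i]
        self.pos[self.data[i][1]] = i
        self.pos[self.data[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.data[i][0] < self.data[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.data)
        while True:
            c = 2 * i + 1
            if c + 1 < n and self.data[c + 1][0] < self.data[c][0]:
                c += 1   # pick the smaller child
            if c >= n or self.data[i][0] <= self.data[c][0]:
                return
            self._swap(i, c)
            i = c

    def insert(self, key, prio):               # O(log n)
        self.data.append((prio, key))
        self.pos[key] = len(self.data) - 1
        self._sift_up(len(self.data) - 1)

    def find_min(self):                        # O(1)
        return self.data[0][1]

    def delete(self, key):                     # O(log n)
        i = self.pos.pop(key)
        last = self.data.pop()
        if i < len(self.data):
            self.data[i] = last
            self.pos[last[1]] = i
            self._sift_up(i)
            self._sift_down(i)

    def change_prio(self, key, prio):          # O(log n)
        i = self.pos[key]
        self.data[i] = (prio, key)
        self._sift_up(i)
        self._sift_down(i)
```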
#### Concatenable priority queue
Each top-level blossom maintains a concatenable priority queue containing its vertices.
We use a specific type of concatenable queue that supports the following operations
[[4]](#galil_micali_gabow1986) [[8]](#aho_hopcroft_ullman1974):
- _create_ a new queue containing 1 new element;
- find the element with _minimum_ priority in time _O(1)_;
- _change_ the priority of a given element in time _O(log n)_;
- _merge_ two queues into one new queue in time _O(log n)_;
- _split_ a queue, thus undoing the previous _merge_ step in time _O(log n)_.
The efficient _merge_ and _split_ operations make it possible to adapt the queue during
blossom creation and blossom expansion steps.
The priorities in the queue are used to find, for a given top-level blossom, its vertex
with least-slack edge to an S-blossom.
The concatenable queue is implemented as a balanced tree, specifically a _2-3 tree_.
Each internal node of a 2-3 tree has either 2 or 3 children.
The leaf nodes of the tree represent the elements of the queue.
All leaf nodes are at the same distance from the root.
Each node has a pointer to its parent, and each internal node has pointers to its children.
Each internal node also stores its height (distance to its leaf nodes).
Only leaf nodes have a priority.
However, each internal node maintains a pointer to the leaf node with minimum priority
within its subtree.
As a consequence, the root of the tree has a pointer to the least-priority element in the queue.
To keep this information consistent, any change in the priority of a leaf node must
be followed by updating the internal nodes along a path from the leaf node to the root.
The same must be done when the structure of the tree is adjusted.
The left-to-right order of the leaf nodes is preserved during all operations, including _merge_
and _split_.
When trees _A_ and _B_ are merged, the sequence of leaf nodes in the merged tree will consist of
the leaf nodes of _A_ followed by the leaf nodes of _B_.
Note that the left-to-right order of the leaf nodes is unrelated to the priorities of the elements.
To merge two trees, the root of the smaller tree is inserted as a child of an appropriate node
in the larger tree.
Subsequent adjustments are needed to restore the consistency of the 2-3 tree and to update
the minimum-element pointers of the internal nodes along a path from the insertion point
to the root of the merged tree.
This can be done in a number of steps proportional to the difference in height between
the two trees, which is in any case _O(log n)_.
To split a tree, a _split node_ is identified: the left-most leaf node that must end up
in the right-side tree after splitting.
Internal nodes are deleted along the path from the _split node_ to the root of the tree.
This creates a forest of disconnected subtrees on the left side of the path,
and a similar forest of subtrees on the right side of the split path.
The left-side subtrees are reassembled into a single tree through a series of _merge_ steps.
Although _O(log n)_ merge steps may be needed, the total time required for reassembly
is also _O(log n)_ because each merge step combines trees of similar height.
A similar reassembly is done for the forest on the right side of the split path.
The concatenable queues have an additional purpose in the matching algorithm:
finding the top-level blossom that contains a given vertex.
To do this, we assign a _name_ to each concatenable queue instance, which is simply
a pointer to the top-level blossom that maintains the queue.
An extra operation is defined:
_find_ the name of the queue instance that contains a given element in time _O(log n)_.
Implementing the _find_ operation is easy:
Starting at the leaf node that represents the element, follow _parent_ pointers
to the root of the tree.
The root node contains the name of the queue.
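The _find_ operation can be sketched in a few lines (node and attribute names are invented; only the parent-pointer walk is shown, which is _O(log n)_ because the tree is balanced):

```python
class QueueNode:
    def __init__(self, parent=None, name=None):
        self.parent = parent   # None at the root
        self.name = name       # at the root: the owning top-level blossom

def find_queue_name(node):
    # Follow parent pointers to the root; the root stores the queue's name,
    # i.e. the top-level blossom that contains this element.
    while node.parent is not None:
        node = node.parent
    return node.name
```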
#### General data

`vertex_mate[x] = y` if the edge between vertex _x_ and vertex _y_ is matched. <br>
`vertex_mate[x] = -1` if vertex _x_ is unmatched.

`vertex_dual[x]` holds the modified vertex dual _u'<sub>x</sub>_.

A global list holds vertex indices of S-vertices whose edges have not yet been scanned.
Vertices are inserted in this list when their top-level blossom gets label S.
#### Blossoms

Both types of blossoms are represented as class instances with the following attributes:

* `B.label` is `S` or `T` or `None`
* `B.tree_edge = (x, y)` if _B_ is a labeled top-level blossom, where _y_ is a vertex in _B_
  and _(x, y)_ is the edge that links _B_ to its parent in the alternating tree.
* `B.tree_blossoms` points to a list of blossoms in the same alternating tree, if _B_
  is a labeled top-level blossom.
* `B.vertex_dual_offset` holds the pending change to vertex duals _offset<sub>B</sub>_.

A non-trivial blossom additionally has the following attributes:

* `B.subblossoms` is an array of pointers to the sub-blossoms of _B_,
  starting with the sub-blossom that contains the base vertex.
* `B.edges` is an array of alternating edges connecting the sub-blossoms.
* `B.dual_var` holds the modified blossom dual _z'<sub>B</sub>_.

Single-vertex blossoms are kept in an array indexed by vertex index. <br>
Non-trivial blossoms are kept in a separate list. <br>
These arrays are used to iterate over blossoms and to find the trivial blossom
that consists of a given vertex.
#### Memory usage

The data structures described above use a constant amount of memory per vertex and per edge
and per blossom.
Therefore the total memory requirement is _O(m + n)_.
### Recursion

Certain tasks in the algorithm are recursive in nature:
Proof by induction that all vertex duals are multiples of 0.5 and all blossom duals are integers:

- Blossom duals increase or decrease by _2\*&delta;_,
  therefore updated blossom duals are still integers.
It is useful to know that (modified) dual variables and (modified) edge slacks
are limited to a finite range of values which depends on the maximum edge weight.
This may be helpful when choosing an integer data type for these variables.
(Alternatively, choose a programming language with unlimited integer range.
This is perhaps the thing I love most about Python.)

- The value of _&Delta;_ (sum over _&delta;_ steps) does not exceed _maxweight/2_.
  Proof:
  - Vertex dual variables start at _u<sub>x</sub> = maxweight/2_.
  - While the algorithm runs, there is at least one vertex which has been unmatched
    since the beginning.
    This vertex has always had label S, therefore its dual is _maxweight/2 - &Delta;_.
    Vertex duals can not be negative, therefore _&Delta; &le; maxweight/2_.
- Vertex duals are limited to the range 0 to _maxweight_.
- Blossom duals are limited to the range 0 to _maxweight_.
- Edge slack is limited to the range 0 to _2\*maxweight_.
- Modified vertex duals are limited to the range 0 to _1.5\*maxweight_.
- Modified blossom duals are limited to the range _-maxweight_ to _2\*maxweight_.
- Modified edge slack is limited to the range 0 to _3\*maxweight_.
- Dual offsets are limited to the range _-maxweight/2_ to _maxweight/2_.
### Handling floating point edge weights

Floating point calculations are subject to rounding errors.
As a result, the algorithm may return a matching which has slightly lower weight than
the actual maximum weight.

The algorithm will always return a valid matching, even if rounding errors occur.
Floating point comparisons affect which actions are taken during delta steps,
and thus eventually determine which edges are matched.
But the overall structure of the algorithm guarantees that it will eventually return
a valid (if possibly suboptimal) matching.

The most challenging cases are probably graphs with edge weights that differ by many orders
of magnitude.
Unfortunately I don't know how to properly analyze the floating point accuracy of this algorithm. Unfortunately I don't know how to properly analyze the floating point accuracy of this algorithm.
### Finding a maximum weight matching out of all maximum cardinality matchings
([link](https://dl.acm.org/doi/abs/10.5555/320176.320229))
([pdf](https://dl.acm.org/doi/pdf/10.5555/320176.320229))
7. <a id="mehlhorn_schafer2002"></a>
Kurt Mehlhorn, Guido Schäfer, "Implementation of O(nm log(n)) Weighted Matchings in General Graphs: The Power of Data Structures", _Journal of Experimental Algorithmics vol. 7_, 2002.
([link](https://dl.acm.org/doi/10.1145/944618.944622))
8. <a id="aho_hopcroft_ullman1974"></a>
Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman,
_The Design and Analysis of Computer Algorithms_, Addison-Wesley, 1974.
---

Written in 2023-2024 by Joris van Rantwijk.
This work is licensed under [CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0/).
import math
from collections.abc import Sequence
from typing import NamedTuple, Optional

from .datastruct import ConcatenableQueue, PriorityQueue


def maximum_weight_matching(
class Blossom:
        # all top-level blossoms in the tree.
        self.tree_blossoms: Optional[set[Blossom]] = None

        # Each top-level blossom maintains a concatenable queue containing
        # all vertices in the blossom.
        self.vertex_queue: ConcatenableQueue[Blossom, int]
        self.vertex_queue = ConcatenableQueue(self)

        # If this is a top-level unlabeled blossom with an edge to an
        # S-blossom, "delta2_node" is the corresponding node in the delta2
class NonTrivialBlossom(Blossom):
        # The value of the dual variable changes through delta steps,
        # but these changes are implemented as lazy updates.
        #
        # blossom.dual_var holds the modified blossom dual value.
        # The modified blossom dual is invariant under delta steps.
        #
        # The true dual value of a top-level S-blossom is
        #   blossom.dual_var + ctx.delta_sum_2x
        #
@@ -478,7 +482,6 @@ class NonTrivialBlossom(Blossom):
         # The true dual value of any other type of blossom is simply
         #     blossom.dual_var
         #
-        # Note that "dual_var" is invariant under delta steps.
         self.dual_var: float = 0

         # If this is a top-level T-blossom,
@@ -553,11 +556,12 @@ class MatchingContext:
         # Initially there are no non-trivial blossoms.
         self.nontrivial_blossom: set[NonTrivialBlossom] = set()

-        # "vertex_set_node[x]" represents the vertex "x" inside the
-        # union-find datastructure of its top-level blossom.
+        # "vertex_queue_node[x]" represents the vertex "x" inside the
+        # concatenable queue of its top-level blossom.
         #
         # Initially, each vertex belongs to its own trivial top-level blossom.
-        self.vertex_set_node = [b.vertex_set.insert(i, math.inf)
-                                for (i, b) in enumerate(self.trivial_blossom)]
+        self.vertex_queue_node = [
+            b.vertex_queue.insert(i, math.inf)
+            for (i, b) in enumerate(self.trivial_blossom)]

         # All vertex duals are initialized to half the maximum edge weight.
@@ -573,6 +577,9 @@ class MatchingContext:
         # The value of the dual variable changes through delta steps,
         # but these changes are implemented as lazy updates.
         #
+        # vertex_dual_2x[x] holds 2 times the modified vertex dual value of
+        # vertex "x". The modified vertex dual is invariant under delta steps.
+        #
         # The true dual value of an S-vertex is
         #     (vertex_dual_2x[x] - delta_sum_2x) / 2
         #
@@ -582,7 +589,6 @@ class MatchingContext:
         # The true dual value of an unlabeled vertex is
         #     (vertex_dual_2x[x] + B(x).vertex_dual_offset) / 2
         #
-        # Note that "vertex_dual_2x" is invariant under delta steps.
         self.vertex_dual_2x: list[float]
         self.vertex_dual_2x = num_vertex * [self.start_vertex_dual_2x]
@@ -622,8 +628,19 @@ class MatchingContext:
         for blossom in itertools.chain(self.trivial_blossom,
                                        self.nontrivial_blossom):
             blossom.parent = None
-            blossom.vertex_set.clear()
-            del blossom.vertex_set
+            blossom.vertex_queue.clear()
+            del blossom.vertex_queue
+
+    #
+    # Find top-level blossom:
+    #
+
+    def top_level_blossom(self, x: int) -> Blossom:
+        """Find the top-level blossom that contains vertex "x".
+
+        This function takes time O(log(n)).
+        """
+        return self.vertex_queue_node[x].find()

     #
     # Least-slack edge tracking:
@@ -635,14 +652,14 @@ class MatchingContext:
         The pseudo-slack of an edge is related to its true slack, but
         adjusted in a way that makes it invariant under delta steps.

-        If the edge connects two S-vertices in different top-level blossoms,
-        the true slack is the pseudo-slack minus 2 times the running sum
-        of delta steps.
+        The true slack of an edge between two S-vertices in different
+        top-level blossoms is
+            edge_pseudo_slack_2x(e) / 2 - delta_sum_2x

-        If the edge connects an S-vertex to an unlabeled vertex,
-        the true slack is the pseudo-slack minus the running sum of delta
-        steps, plus the pending offset of the top-level blossom that contains
-        the unlabeled vertex.
+        The true slack of an edge between an S-vertex and an unlabeled
+        vertex "y" inside top-level blossom B(y) is
+            (edge_pseudo_slack_2x(e)
+             - delta_sum_2x + B(y).vertex_dual_offset) / 2
         """
         (x, y, w) = self.graph.edges[e]
         return self.vertex_dual_2x[x] + self.vertex_dual_2x[y] - 2 * w
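The lazy-slack bookkeeping in this hunk can be sanity-checked with a few lines of arithmetic. This is only a sketch: the plain variables below stand in for the fields named in the diff, and are not the real implementation.

```python
# Toy check of the pseudo-slack invariant for an edge between two S-vertices.
w = 5.0                            # edge weight
delta_sum_2x = 4.0                 # 2 * (running sum of delta steps)
vertex_dual_2x = {0: 9.0, 1: 7.0}  # 2 * modified (delta-invariant) vertex duals

# Pseudo-slack, as computed by edge_pseudo_slack_2x(): invariant under deltas.
pseudo_2x = vertex_dual_2x[0] + vertex_dual_2x[1] - 2 * w

# True duals of the two S-vertices after the pending delta updates.
dual0 = (vertex_dual_2x[0] - delta_sum_2x) / 2
dual1 = (vertex_dual_2x[1] - delta_sum_2x) / 2

# True slack computed directly ...
slack_direct = dual0 + dual1 - w
# ... equals the formula from the docstring.
slack_lazy = pseudo_2x / 2 - delta_sum_2x
assert slack_direct == slack_lazy
```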
@@ -668,8 +685,8 @@ class MatchingContext:
         if not improved:
             return

-        # Update the priority of "y" in its UnionFindQueue.
-        self.vertex_set_node[y].set_prio(prio)
+        # Update the priority of "y" in its ConcatenableQueue.
+        self.vertex_queue_node[y].set_prio(prio)

         # If the blossom is unlabeled and the new edge becomes its least-slack
         # S-edge, insert or update the blossom in the global delta2 queue.
@@ -701,13 +718,13 @@ class MatchingContext:
         else:
             prio = vertex_sedge_queue.find_min().prio

-        # If necessary, update the priority of "y" in its UnionFindQueue.
-        if prio > self.vertex_set_node[y].prio:
-            self.vertex_set_node[y].set_prio(prio)
+        # If necessary, update priority of "y" in its ConcatenableQueue.
+        if prio > self.vertex_queue_node[y].prio:
+            self.vertex_queue_node[y].set_prio(prio)

         if by.label == LABEL_NONE:
             # Update or delete the blossom in the global delta2 queue.
             assert by.delta2_node is not None
-            prio = by.vertex_set.min_prio()
+            prio = by.vertex_queue.min_prio()
             if prio < math.inf:
                 prio += by.vertex_dual_offset
             if prio > by.delta2_node.prio:
@@ -727,7 +744,7 @@ class MatchingContext:
         This function takes time O(log(n)).
         """
         assert blossom.delta2_node is None
-        prio = blossom.vertex_set.min_prio()
+        prio = blossom.vertex_queue.min_prio()
         if prio < math.inf:
             prio += blossom.vertex_dual_offset
         blossom.delta2_node = self.delta2_queue.insert(prio, blossom)
@@ -758,7 +775,7 @@ class MatchingContext:
         self.vertex_sedge_queue[x].clear()
         for e in self.graph.adjacent_edges[x]:
             self.vertex_sedge_node[e] = None
-        self.vertex_set_node[x].set_prio(math.inf)
+        self.vertex_queue_node[x].set_prio(math.inf)

     def delta2_get_min_edge(self) -> tuple[int, float]:
         """Find the least-slack edge between any S-vertex and any unlabeled
@@ -781,7 +798,7 @@ class MatchingContext:
         assert blossom.parent is None
         assert blossom.label == LABEL_NONE

-        x = blossom.vertex_set.min_elem()
+        x = blossom.vertex_queue.min_elem()
         e = self.vertex_sedge_queue[x].find_min().data
         return (e, slack_2x)
@@ -837,8 +854,8 @@ class MatchingContext:
         delta3_node = self.delta3_queue.find_min()
         e = delta3_node.data
         (x, y, _w) = self.graph.edges[e]
-        bx = self.vertex_set_node[x].find()
-        by = self.vertex_set_node[y].find()
+        bx = self.top_level_blossom(x)
+        by = self.top_level_blossom(y)
         assert (bx.label == LABEL_S) and (by.label == LABEL_S)
         if bx is not by:
             slack = delta3_node.prio - self.delta_sum_2x
@@ -1000,7 +1017,7 @@ class MatchingContext:
         # and vertex "x" is no longer an S-vertex.
         self.delta3_remove_edge(e)

-        by = self.vertex_set_node[y].find()
+        by = self.top_level_blossom(y)
         if by.label == LABEL_S:
             # Edge "e" connects unlabeled vertex "x" to S-vertex "y".
             # It must be tracked for delta2 via vertex "x".
@@ -1120,7 +1137,7 @@ class MatchingContext:
         while x != -1 or y != -1:

             # Check if we found a common ancestor.
-            bx = self.vertex_set_node[x].find()
+            bx = self.top_level_blossom(x)
             if bx.marker:
                 first_common = bx
                 break
@@ -1148,8 +1165,8 @@ class MatchingContext:

         # If we found a common ancestor, trim the paths so they end there.
         if first_common is not None:
-            assert self.vertex_set_node[xedges[-1][0]].find() is first_common
-            while (self.vertex_set_node[yedges[-1][0]].find()
+            assert self.top_level_blossom(xedges[-1][0]) is first_common
+            while (self.top_level_blossom(yedges[-1][0])
                    is not first_common):
                 yedges.pop()
@@ -1186,12 +1203,12 @@ class MatchingContext:
         assert len(path.edges) >= 3

         # Construct the list of sub-blossoms (current top-level blossoms).
-        subblossoms = [self.vertex_set_node[x].find() for (x, y) in path.edges]
+        subblossoms = [self.top_level_blossom(x) for (x, y) in path.edges]

         # Check that the path is cyclic.
         # Note the path will not always start and end with the same _vertex_,
         # but it must start and end in the same _blossom_.
-        subblossoms_next = [self.vertex_set_node[y].find()
+        subblossoms_next = [self.top_level_blossom(y)
                             for (x, y) in path.edges]
         assert subblossoms[0] == subblossoms_next[-1]
         assert subblossoms[1:] == subblossoms_next[:-1]
@@ -1235,8 +1252,8 @@ class MatchingContext:
                 sub.tree_blossoms = None
                 tree_blossoms.remove(sub)

-        # Merge union-find structures.
-        blossom.vertex_set.merge([sub.vertex_set for sub in subblossoms])
+        # Merge concatenable queues.
+        blossom.vertex_queue.merge([sub.vertex_queue for sub in subblossoms])

     @staticmethod
     def find_path_through_blossom(
@@ -1280,8 +1297,8 @@ class MatchingContext:
         # Remove blossom from the delta2 queue.
         self.delta2_disable_blossom(blossom)

-        # Split union-find structure.
-        blossom.vertex_set.split()
+        # Split concatenable queue.
+        blossom.vertex_queue.split()

         # Prepare to push lazy delta updates down to the sub-blossoms.
         vertex_dual_offset = blossom.vertex_dual_offset
@@ -1298,7 +1315,7 @@ class MatchingContext:
             self.delta2_enable_blossom(sub)

         # Avoid leaking a reference cycle.
-        del blossom.vertex_set
+        del blossom.vertex_queue

         # Delete the expanded blossom.
         self.nontrivial_blossom.remove(blossom)
@@ -1334,7 +1351,7 @@ class MatchingContext:
         # the alternating tree.
         assert blossom.tree_edge is not None
         (x, y) = blossom.tree_edge
-        sub = self.vertex_set_node[y].find()
+        sub = self.top_level_blossom(y)

         # Assign label T to that sub-blossom.
         self.assign_blossom_label_t(sub)
@@ -1472,14 +1489,14 @@ class MatchingContext:
     def augment_matching(self, path: AlternatingPath) -> None:
         """Augment the matching through the specified augmenting path.

-        This function takes time O(n).
+        This function takes time O(n * log(n)).
         """
         # Check that the augmenting path starts and ends in
         # an unmatched vertex or a blossom with unmatched base.
         assert len(path.edges) % 2 == 1
         for x in (path.edges[0][0], path.edges[-1][1]):
-            b = self.vertex_set_node[x].find()
+            b = self.top_level_blossom(x)
             assert self.vertex_mate[b.base_vertex] == -1

         # The augmenting path looks like this:
@@ -1497,11 +1514,11 @@ class MatchingContext:

             # Augment the non-trivial blossoms on either side of this edge.
             # No action is necessary for trivial blossoms.
-            bx = self.vertex_set_node[x].find()
+            bx = self.top_level_blossom(x)
             if isinstance(bx, NonTrivialBlossom):
                 self.augment_blossom(bx, self.trivial_blossom[x])

-            by = self.vertex_set_node[y].find()
+            by = self.top_level_blossom(y)
             if isinstance(by, NonTrivialBlossom):
                 self.augment_blossom(by, self.trivial_blossom[y])
@@ -1526,14 +1543,14 @@ class MatchingContext:
         """

         # Assign label S to the blossom that contains vertex "x".
-        bx = self.vertex_set_node[x].find()
+        bx = self.top_level_blossom(x)
         self.assign_blossom_label_s(bx)

         # Vertex "x" is matched to T-vertex "y".
         y = self.vertex_mate[x]
         assert y != -1

-        by = self.vertex_set_node[y].find()
+        by = self.top_level_blossom(y)
         assert by.label == LABEL_T
         assert by.tree_blossoms is not None
@@ -1556,14 +1573,14 @@ class MatchingContext:
         - There is a tight edge between vertices "x" and "y".
         """

-        bx = self.vertex_set_node[x].find()
-        by = self.vertex_set_node[y].find()
+        bx = self.top_level_blossom(x)
+        by = self.top_level_blossom(y)
         assert bx.label == LABEL_S

         # Expand zero-dual blossoms before assigning label T.
         while isinstance(by, NonTrivialBlossom) and (by.dual_var == 0):
             self.expand_unlabeled_blossom(by)
-            by = self.vertex_set_node[y].find()
+            by = self.top_level_blossom(y)

         # Assign label T to the unlabeled blossom.
         self.assign_blossom_label_t(by)
@@ -1597,8 +1614,8 @@ class MatchingContext:
             True if the matching was augmented; otherwise False.
         """

-        bx = self.vertex_set_node[x].find()
-        by = self.vertex_set_node[y].find()
+        bx = self.top_level_blossom(x)
+        by = self.top_level_blossom(y)
         assert bx.label == LABEL_S
         assert by.label == LABEL_S
@@ -1662,7 +1679,7 @@ class MatchingContext:
         for x in self.scan_queue:

             # Double-check that "x" is an S-vertex.
-            bx = self.vertex_set_node[x].find()
+            bx = self.top_level_blossom(x)
             assert bx.label == LABEL_S

             # Scan the edges that are incident on "x".
@@ -1682,7 +1699,7 @@ class MatchingContext:
                 # Try to pull this edge into an alternating tree.

                 # Ignore edges that are internal to a blossom.
-                by = self.vertex_set_node[y].find()
+                by = self.top_level_blossom(y)
                 if bx is by:
                     continue
@@ -1777,7 +1794,7 @@ class MatchingContext:
         """
         for x in range(self.graph.num_vertex):
             assert self.vertex_mate[x] == -1
-            bx = self.vertex_set_node[x].find()
+            bx = self.top_level_blossom(x)
             assert bx.base_vertex == x

             # Assign label S.
@@ -1828,7 +1845,7 @@ class MatchingContext:
             # Use the edge from S-vertex to unlabeled vertex that got
             # unlocked through the delta update.
             (x, y, _w) = self.graph.edges[delta_edge]
-            if self.vertex_set_node[x].find().label != LABEL_S:
+            if self.top_level_blossom(x).label != LABEL_S:
                 (x, y) = (y, x)
             self.extend_tree_s_to_t(x, y)
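As independent context for what the patched module computes: on a tiny graph, a maximum weight matching can be found by exhaustive search, which is a handy cross-check against the optimized solver. This sketch is not part of the repository; `brute_force_mwm` is a hypothetical helper.

```python
# Brute-force maximum weight matching: try every subset of edges and keep
# the heaviest one in which no two edges share a vertex.
from itertools import combinations

def brute_force_mwm(n, edges):
    """Return (edge indices, total weight) of a maximum weight matching.

    edges is a list of (x, y, weight) tuples on vertices 0 .. n-1.
    """
    best, best_w = [], 0.0
    for k in range(1, n // 2 + 1):
        for subset in combinations(range(len(edges)), k):
            verts = [v for i in subset for v in edges[i][:2]]
            if len(verts) == len(set(verts)):      # no shared vertices
                w = sum(edges[i][2] for i in subset)
                if w > best_w:
                    best, best_w = list(subset), w
    return best, best_w

# On a path 0-1-2-3, taking the two outer edges (total weight 4.0)
# beats the single heaviest edge in the middle (weight 3.0).
edges = [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 2.0)]
matching, weight = brute_force_mwm(4, edges)
assert weight == 4.0
assert sorted(matching) == [0, 2]
```

This exponential-time check is only usable on small graphs, which is exactly why the repository implements the blossom algorithm instead.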

View File

@@ -11,11 +11,11 @@ _ElemT = TypeVar("_ElemT")
 _ElemT2 = TypeVar("_ElemT2")

-class UnionFindQueue(Generic[_NameT, _ElemT]):
-    """Combination of disjoint set and priority queue.
+class ConcatenableQueue(Generic[_NameT, _ElemT]):
+    """Priority queue supporting efficient merge and split operations.
+
+    This is a combination of a disjoint set and a priority queue.

     A queue has a "name", which can be any Python object.
     Each element has associated "data", which can be any Python object.
     Each element has a priority.
@@ -27,68 +27,70 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
     - Merge two or more queues.
     - Undo a previous merge step.

-    The implementation is essentially an AVL tree, with minimum-priority
+    This data structure is implemented as a 2-3 tree with minimum-priority
     tracking added to it.
     """

-    __slots__ = ("name", "tree", "sub_queues", "split_nodes")
+    __slots__ = ("name", "tree", "first_node", "sub_queues")

-    class Node(Generic[_NameT2, _ElemT2]):
-        """Node in a UnionFindQueue."""
+    class BaseNode(Generic[_NameT2, _ElemT2]):
+        """Node in the 2-3 tree."""

-        __slots__ = ("owner", "data", "prio", "min_node", "height",
-                     "parent", "left", "right")
+        __slots__ = ("owner", "min_node", "height", "parent", "childs")

         def __init__(self,
-                     owner: UnionFindQueue[_NameT2, _ElemT2],
-                     data: _ElemT2,
-                     prio: float
+                     min_node: ConcatenableQueue.Node[_NameT2, _ElemT2],
+                     height: int
                      ) -> None:
-            """Initialize a new element.
+            """Initialize a new node."""
+            self.owner: Optional[ConcatenableQueue[_NameT2, _ElemT2]] = None
+            self.min_node = min_node
+            self.height = height
+            self.parent: Optional[ConcatenableQueue.BaseNode[_NameT2,
+                                                             _ElemT2]]
+            self.parent = None
+            self.childs: list[ConcatenableQueue.BaseNode[_NameT2, _ElemT2]]
+            self.childs = []
+
+    class Node(BaseNode[_NameT2, _ElemT2]):
+        """Leaf node in the 2-3 tree, representing an element in the queue."""
+
+        __slots__ = ("data", "prio")
+
+        def __init__(self, data: _ElemT2, prio: float) -> None:
+            """Initialize a new leaf node.

             This method should not be called directly.
-            Instead, call UnionFindQueue.insert().
+            Instead, call ConcatenableQueue.insert().
             """
-            self.owner: Optional[UnionFindQueue[_NameT2, _ElemT2]] = owner
+            super().__init__(min_node=self, height=0)
             self.data = data
             self.prio = prio
-            self.min_node = self
-            self.height = 1
-            self.parent: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
-            self.left: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
-            self.right: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
-            self.parent = None
-            self.left = None
-            self.right = None

         def find(self) -> _NameT2:
             """Return the name of the queue that contains this element.

             This function takes time O(log(n)).
             """
-            node = self
+            node: ConcatenableQueue.BaseNode[_NameT2, _ElemT2] = self
             while node.parent is not None:
                 node = node.parent
             assert node.owner is not None
             return node.owner.name

         def set_prio(self, prio: float) -> None:
-            """Change the priority of this element."""
+            """Change the priority of this element.
+
+            This function takes time O(log(n)).
+            """
             self.prio = prio
-            node = self
-            while True:
-                min_node = node
-                if node.left is not None:
-                    left_min_node = node.left.min_node
-                    if left_min_node.prio < min_node.prio:
-                        min_node = left_min_node
-                if node.right is not None:
-                    right_min_node = node.right.min_node
-                    if right_min_node.prio < min_node.prio:
-                        min_node = right_min_node
+            node = self.parent
+            while node is not None:
+                min_node = node.childs[0].min_node
+                for child in node.childs[1:]:
+                    if child.min_node.prio < min_node.prio:
+                        min_node = child.min_node
                 node.min_node = min_node
-                if node.parent is None:
-                    break
                 node = node.parent

     def __init__(self, name: _NameT) -> None:
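The min-priority repair loop that the new `set_prio` performs can be illustrated with toy objects. This is a minimal sketch using hypothetical `ToyNode` nodes, not the classes from this repository: after a leaf's priority changes, each ancestor recomputes its `min_node` as the best `min_node` among its childs.

```python
# Toy illustration of bubbling min-priority information up a tree of
# nodes that each keep a "min_node" pointer over their childs.
class ToyNode:
    def __init__(self, prio=None):
        self.prio = prio          # only meaningful for leaf nodes
        self.min_node = self
        self.parent = None
        self.childs = []

    def add_child(self, child):
        child.parent = self
        self.childs.append(child)

def set_prio(leaf, prio):
    """Change a leaf's priority and repair min_node on all ancestors."""
    leaf.prio = prio
    node = leaf.parent
    while node is not None:
        min_node = node.childs[0].min_node
        for child in node.childs[1:]:
            if child.min_node.prio < min_node.prio:
                min_node = child.min_node
        node.min_node = min_node
        node = node.parent

# Build a tiny 2-level tree with three leaves under one root.
root = ToyNode()
leaves = [ToyNode(5.0), ToyNode(3.0), ToyNode(7.0)]
for leaf in leaves:
    root.add_child(leaf)
    set_prio(leaf, leaf.prio)     # establish min_node pointers

assert root.min_node.prio == 3.0
set_prio(leaves[1], 9.0)          # raise the old minimum
assert root.min_node.prio == 5.0  # root now tracks the new minimum
```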
@@ -100,9 +102,10 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
             name: Name to assign to the new queue.
         """
         self.name = name
-        self.tree: Optional[UnionFindQueue.Node[_NameT, _ElemT]] = None
-        self.sub_queues: list[UnionFindQueue[_NameT, _ElemT]] = []
-        self.split_nodes: list[UnionFindQueue.Node[_NameT, _ElemT]] = []
+        self.tree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
+        self.first_node: Optional[ConcatenableQueue.Node[_NameT, _ElemT]]
+        self.first_node = None
+        self.sub_queues: list[ConcatenableQueue[_NameT, _ElemT]] = []

     def clear(self) -> None:
         """Remove all elements from the queue.
@@ -111,8 +114,8 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         """
         node = self.tree
         self.tree = None
+        self.first_node = None
         self.sub_queues = []
-        self.split_nodes.clear()

         # Wipe pointers to enable refcounted garbage collection.
         if node is not None:
@@ -120,12 +123,8 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
             while node is not None:
                 node.min_node = None  # type: ignore
                 prev_node = node
-                if node.left is not None:
-                    node = node.left
-                    prev_node.left = None
-                elif node.right is not None:
-                    node = node.right
-                    prev_node.right = None
+                if node.childs:
+                    node = node.childs.pop()
                 else:
                     node = node.parent
                     prev_node.parent = None
@@ -143,7 +142,9 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
             prio: Initial priority of the new element.
         """
         assert self.tree is None
-        self.tree = UnionFindQueue.Node(self, elem, prio)
+        self.tree = ConcatenableQueue.Node(elem, prio)
+        self.tree.owner = self
+        self.first_node = self.tree
         return self.tree

     def min_prio(self) -> float:
@@ -167,7 +168,7 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         return node.min_node.data

     def merge(self,
-              sub_queues: list[UnionFindQueue[_NameT, _ElemT]]
+              sub_queues: list[ConcatenableQueue[_NameT, _ElemT]]
               ) -> None:
         """Merge the specified queues.
@@ -184,7 +185,6 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         """
         assert self.tree is None
         assert not self.sub_queues
-        assert not self.split_nodes
         assert sub_queues

         # Keep the list of sub-queues.
@@ -193,6 +193,7 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         # Move the root node from the first sub-queue to this queue.
         # Clear its owner pointer.
         self.tree = sub_queues[0].tree
+        self.first_node = sub_queues[0].first_node
         assert self.tree is not None
         sub_queues[0].tree = None
         self.tree.owner = None
@@ -208,10 +209,7 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
             subtree.owner = None

             # Merge our current tree with the tree from the sub-queue.
-            (self.tree, split_node) = self._merge_tree(self.tree, subtree)
-
-            # Keep track of the left-most node from the sub-queue.
-            self.split_nodes.append(split_node)
+            self.tree = self._join(self.tree, subtree)

         # Put the owner pointer in the root node.
         self.tree.owner = self
@@ -234,10 +232,10 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         self.tree.owner = None

         # Split the tree to reconstruct each sub-queue.
-        for (sub, split_node) in zip(self.sub_queues[:0:-1],
-                                     self.split_nodes[::-1]):
-            (tree, rtree) = self._split_tree(split_node)
+        for sub in self.sub_queues[:0:-1]:
+            assert sub.first_node is not None
+            (tree, rtree) = self._split_tree(sub.first_node)

             # Assign the right tree to the sub-queue.
             sub.tree = rtree
@@ -249,423 +247,184 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
         # Make this queue empty.
         self.tree = None
+        self.first_node = None
         self.sub_queues = []
-        self.split_nodes.clear()

     @staticmethod
-    def _repair_node(node: Node[_NameT, _ElemT]) -> None:
-        """Recalculate the height and min-priority information of the
-        specified node.
-        """
-
-        # Repair node height.
-        lh = 0 if node.left is None else node.left.height
-        rh = 0 if node.right is None else node.right.height
-        node.height = 1 + max(lh, rh)
-
-        # Repair min-priority.
-        min_node = node
-        if node.left is not None:
-            left_min_node = node.left.min_node
-            if left_min_node.prio < min_node.prio:
-                min_node = left_min_node
-        if node.right is not None:
-            right_min_node = node.right.min_node
-            if right_min_node.prio < min_node.prio:
-                min_node = right_min_node
-        node.min_node = min_node
+    def _repair_node(node: BaseNode[_NameT, _ElemT]) -> None:
+        """Repair min_prio attribute of an internal node."""
+        min_node = node.childs[0].min_node
+        for child in node.childs[1:]:
+            if child.min_node.prio < min_node.prio:
+                min_node = child.min_node
+        node.min_node = min_node

-    def _rotate_left(self,
-                     node: Node[_NameT, _ElemT]
-                     ) -> Node[_NameT, _ElemT]:
-        """Rotate the specified subtree to the left.
-
-        Return the new root node of the subtree.
-        """
-        #
-        #    N                 C
-        #   / \               / \
-        #  A   C    --->     N   D
-        #     / \           / \
-        #    B   D         A   B
-        #
-        parent = node.parent
-        new_top = node.right
-        assert new_top is not None
-        node.right = new_top.left
-        if node.right is not None:
-            node.right.parent = node
-        new_top.left = node
-        new_top.parent = parent
-        node.parent = new_top
-        if parent is not None:
-            if parent.left is node:
-                parent.left = new_top
-            elif parent.right is node:
-                parent.right = new_top
-        self._repair_node(node)
-        self._repair_node(new_top)
-        return new_top
-
-    def _rotate_right(self,
-                      node: Node[_NameT, _ElemT]
-                      ) -> Node[_NameT, _ElemT]:
-        """Rotate the specified node to the right.
-
-        Return the new root node of the subtree.
-        """
-        #
-        #      N             A
-        #     / \           / \
-        #    A   D  --->   B   N
-        #   / \               / \
-        #  B   C             C   D
-        #
-        parent = node.parent
-        new_top = node.left
-        assert new_top is not None
-        node.left = new_top.right
-        if node.left is not None:
-            node.left.parent = node
-        new_top.right = node
-        new_top.parent = parent
-        node.parent = new_top
-        if parent is not None:
-            if parent.left is node:
-                parent.left = new_top
-            elif parent.right is node:
-                parent.right = new_top
-        self._repair_node(node)
-        self._repair_node(new_top)
-        return new_top
-
-    def _rebalance_up(self,
-                      node: Node[_NameT, _ElemT]
-                      ) -> Node[_NameT, _ElemT]:
-        """Repair and rebalance the specified node and its ancestors.
-
-        Return the root node of the rebalanced tree.
-        """
-
-        # Walk up to the root of the tree.
-        while True:
-            lh = 0 if node.left is None else node.left.height
-            rh = 0 if node.right is None else node.right.height
-            if lh > rh + 1:
-                # This node is left-heavy. Rotate right to rebalance.
-                #
-                #          N             L
-                #         / \           / \
-                #        L   \         /   N
-                #       / \   \  ---> /   / \
-                #      A   B   \     A   B   \
-                #           \   \         \   \
-                #            R             R
-                #
-                lchild = node.left
-                assert lchild is not None
-                if ((lchild.right is not None)
-                        and ((lchild.left is None)
-                             or (lchild.right.height > lchild.left.height))):
-                    # Double rotation.
-                    lchild = self._rotate_left(lchild)
-                node = self._rotate_right(node)
-            elif lh + 1 < rh:
-                # This node is right-heavy. Rotate left to rebalance.
-                #
-                #      N                   R
-                #     / \                 / \
-                #    /   R               N   \
-                #   /   / \    --->     / \   \
-                #  /   A   B           /   A   B
-                #     /               /
-                #    L               L
-                #
-                rchild = node.right
-                assert rchild is not None
-                if ((rchild.left is not None)
-                        and ((rchild.right is None)
-                             or (rchild.left.height > rchild.right.height))):
-                    # Double rotation.
-                    rchild = self._rotate_right(rchild)
-                node = self._rotate_left(node)
-            else:
-                # No rotation. Must still repair node though.
-                self._repair_node(node)
-
-            if node.parent is None:
-                break
-
-            # Continue rebalancing at the parent.
-            node = node.parent
-
-        # Return new root node.
-        return node
+    @staticmethod
+    def _new_internal_node(ltree: BaseNode[_NameT, _ElemT],
+                           rtree: BaseNode[_NameT, _ElemT]
+                           ) -> BaseNode[_NameT, _ElemT]:
+        """Create a new internal node with 2 child nodes."""
+        assert ltree.height == rtree.height
+        height = ltree.height + 1
+        if ltree.min_node.prio <= rtree.min_node.prio:
+            min_node = ltree.min_node
+        else:
+            min_node = rtree.min_node
+        node = ConcatenableQueue.BaseNode(min_node, height)
+        node.childs = [ltree, rtree]
+        ltree.parent = node
+        rtree.parent = node
+        return node

     def _join_right(self,
-                    ltree: Node[_NameT, _ElemT],
-                    node: Node[_NameT, _ElemT],
-                    rtree: Optional[Node[_NameT, _ElemT]]
-                    ) -> Node[_NameT, _ElemT]:
-        """Join a left subtree, middle node and right subtree together.
+                    ltree: BaseNode[_NameT, _ElemT],
+                    rtree: BaseNode[_NameT, _ElemT]
+                    ) -> BaseNode[_NameT, _ElemT]:
+        """Join two trees together.

-        The left subtree must be higher than the right subtree.
+        The initial left subtree must be higher than the right subtree.

         Return the root node of the joined tree.
         """
-        lh = ltree.height
-        rh = 0 if rtree is None else rtree.height
-        assert lh > rh + 1
-
-        # Descend down the right spine of "ltree".
-        # Stop at a node with compatible height, then insert "node"
-        # and attach "rtree".
-        #
-        #   ltree
-        #   /   \
-        #        X
-        #       / \
-        #          X  <-- cur
-        #         / \
-        #            node
-        #            /  \
-        #           X    rtree
-        #
-        # Descend to a point with compatible height.
-        cur = ltree
-        while (cur.right is not None) and (cur.right.height > rh + 1):
-            cur = cur.right
-
-        # Insert "node" and "rtree".
-        node.left = cur.right
-        node.right = rtree
-        if node.left is not None:
-            node.left.parent = node
-        if rtree is not None:
-            rtree.parent = node
-        cur.right = node
-        node.parent = cur
-
-        # A double rotation may be necessary.
-        if (cur.left is None) or (cur.left.height <= rh):
-            node = self._rotate_right(node)
-            cur = self._rotate_left(cur)
-        else:
-            self._repair_node(node)
-            self._repair_node(cur)
-
-        # Ascend from "cur" to the root of the tree.
-        # Repair and/or rotate as needed.
-        while cur.parent is not None:
-            cur = cur.parent
-            assert cur.left is not None
-            assert cur.right is not None
-            if cur.left.height + 1 < cur.right.height:
-                cur = self._rotate_left(cur)
-            else:
-                self._repair_node(cur)
-
-        return cur
+        # Descend down the right spine of the left tree until we
+        # reach a node just above the right tree.
+        node = ltree
+        while node.height > rtree.height + 1:
+            node = node.childs[-1]
+        assert node.height == rtree.height + 1
+
+        # Find a node in the left tree to insert the right tree as child.
+        while len(node.childs) == 3:
+            # This node already has 3 childs so we can not add the right tree.
+            # Rearrange into 2 nodes with 2 childs each, then solve it
+            # at the parent level.
+            #
+            #      N                  N      R'
+            #    / | \               / \    / \
+            #   /  |  \    --->     /   \  /   \
+            #  A   B   C   R       A    B  C    R
+            #
+            child = node.childs.pop()
+            self._repair_node(node)
+            rtree = self._new_internal_node(child, rtree)
+            if node.parent is None:
+                # Create a new root node.
+                return self._new_internal_node(node, rtree)
+            node = node.parent
+
+        # Insert the right tree as child of this node.
+        assert len(node.childs) < 3
+        node.childs.append(rtree)
+        rtree.parent = node
+
+        # Repair min-prio pointers of ancestors.
+        while True:
+            self._repair_node(node)
+            if node.parent is None:
+                break
+            node = node.parent

-    def _join_left(self,
-                   ltree: Optional[Node[_NameT, _ElemT]],
-                   node: Node[_NameT, _ElemT],
-                   rtree: Node[_NameT, _ElemT]
-                   ) -> Node[_NameT, _ElemT]:
-        """Join a left subtree, middle node and right subtree together.
-
-        The right subtree must be higher than the left subtree.
-
-        Return the root node of the joined tree.
-        """
-        lh = 0 if ltree is None else ltree.height
-        rh = rtree.height
-        assert lh + 1 < rh
-
-        # Descend down the left spine of "rtree".
-        # Stop at a node with compatible height, then insert "node"
-        # and attach "ltree".
-        #
-        #           rtree
-        #           /   \
-        #          X
-        #         / \
-        # cur -->    X
-        #           / \
-        #        node
-        #        /  \
-        #   ltree    X
-        #
-        # Descend to a point with compatible height.
-        cur = rtree
-        while (cur.left is not None) and (cur.left.height > lh + 1):
-            cur = cur.left
-
-        # Insert "node" and "ltree".
-        node.left = ltree
-        node.right = cur.left
-        if ltree is not None:
-            ltree.parent = node
-        if node.right is not None:
-            node.right.parent = node
-        cur.left = node
-        node.parent = cur
-
-        # A double rotation may be necessary.
-        if (cur.right is None) or (cur.right.height <= lh):
-            node = self._rotate_left(node)
-            cur = self._rotate_right(cur)
-        else:
-            self._repair_node(node)
-            self._repair_node(cur)
-
-        # Ascend from "cur" to the root of the tree.
-        # Repair and/or rotate as needed.
while cur.parent is not None:
cur = cur.parent
assert cur.left is not None
assert cur.right is not None
if cur.left.height > cur.right.height + 1:
cur = self._rotate_right(cur)
else:
self._repair_node(cur)
return cur
def _join(self,
ltree: Optional[Node[_NameT, _ElemT]],
node: Node[_NameT, _ElemT],
rtree: Optional[Node[_NameT, _ElemT]]
) -> Node[_NameT, _ElemT]:
"""Join a left subtree, middle node and right subtree together.
The left or right subtree may initially be a child of the middle
node; such links will be broken as needed.
The left and right subtrees must be consistent, AVL-balanced trees.
Parent pointers of the subtrees are ignored.
The middle node is considered as a single node.
Its parent and child pointers are ignored.
Return the root node of the joined tree.
"""
lh = 0 if ltree is None else ltree.height
rh = 0 if rtree is None else rtree.height
if lh > rh + 1:
assert ltree is not None
ltree.parent = None
return self._join_right(ltree, node, rtree)
elif lh + 1 < rh:
assert rtree is not None
rtree.parent = None
return self._join_left(ltree, node, rtree)
else:
# Subtree heights are compatible. Just join them.
#
# node
# / \
# ltree rtree
# / \ / \
#
node.parent = None
node.left = ltree
if ltree is not None:
ltree.parent = node
node.right = rtree
if rtree is not None:
rtree.parent = node
self._repair_node(node)
return node return node
def _merge_tree(self, def _join_left(self,
ltree: Node[_NameT, _ElemT], ltree: BaseNode[_NameT, _ElemT],
rtree: Node[_NameT, _ElemT] rtree: BaseNode[_NameT, _ElemT]
) -> tuple[Node[_NameT, _ElemT], Node[_NameT, _ElemT]]: ) -> BaseNode[_NameT, _ElemT]:
"""Merge two trees. """Join two trees together.
Return a tuple (split_node, merged_tree). The initial left subtree must be lower than the right subtree.
Return the root node of the joined tree.
""" """
# Find the left-most node of the right tree. # Descend down the left spine of the right tree until we
split_node = rtree # reach a node just above the left tree.
while split_node.left is not None: node = rtree
split_node = split_node.left while node.height > ltree.height + 1:
node = node.childs[0]
# Delete the split_node from its tree. assert node.height == ltree.height + 1
parent = split_node.parent
if split_node.right is not None: # Find a node in the right tree to insert the left tree as child.
split_node.right.parent = parent while len(node.childs) == 3:
if parent is None: # This node already has 3 childs so we can not add the left tree.
rtree_new = split_node.right # Rearrange into 2 nodes with 2 childs each, then solve it
# at the parent level.
#
# N L' N
# / | \ / \ / \
# / | \ ---> / \ / \
# L A B C L A B C
#
child = node.childs.pop(0)
self._repair_node(node)
ltree = self._new_internal_node(ltree, child)
if node.parent is None:
# Create a new root node.
return self._new_internal_node(ltree, node)
node = node.parent
# Insert the left tree as child of this node.
assert len(node.childs) < 3
node.childs.insert(0, ltree)
ltree.parent = node
# Repair min-prio pointers of ancestors.
while True:
self._repair_node(node)
if node.parent is None:
break
node = node.parent
return node
def _join(self,
ltree: BaseNode[_NameT, _ElemT],
rtree: BaseNode[_NameT, _ElemT]
) -> BaseNode[_NameT, _ElemT]:
"""Join two trees together.
The left and right subtree must be consistent 2-3 trees.
Initial parent pointers of these subtrees are ignored.
Return the root node of the joined tree.
"""
if ltree.height > rtree.height:
return self._join_right(ltree, rtree)
elif ltree.height < rtree.height:
return self._join_left(ltree, rtree)
else: else:
# Repair and rebalance the ancestors of split_node. return self._new_internal_node(ltree, rtree)
parent.left = split_node.right
rtree_new = self._rebalance_up(parent)
# Join the two trees via the split_node.
merged_tree = self._join(ltree, split_node, rtree_new)
return (merged_tree, split_node)
def _split_tree(self, def _split_tree(self,
split_node: Node[_NameT, _ElemT] split_node: BaseNode[_NameT, _ElemT]
) -> tuple[Node[_NameT, _ElemT], Node[_NameT, _ElemT]]: ) -> tuple[BaseNode[_NameT, _ElemT],
BaseNode[_NameT, _ElemT]]:
"""Split a tree on a specified node. """Split a tree on a specified node.
Two new trees will be constructed. Two new trees will be constructed.
All nodes to the left of "split_node" will go to the left tree. Leaf nodes to the left of "split_node" will go to the left tree.
All nodes to the right of "split_node", and "split_node" itself, Leaf nodes to the right of "split_node", and "split_node" itself,
will go to the right tree. will go to the right tree.
Return tuple (ltree, rtree), Return tuple (ltree, rtree).
where ltree contains all nodes left of the split-node,
rtree contains the split-nodes and all nodes to its right.
""" """
# Assign the descendants of "split_node" to the appropriate trees # Detach "split_node" from its parent.
# and detach them from "split_node". # Assign it to the right tree.
ltree = split_node.left
rtree = split_node.right
split_node.left = None
split_node.right = None
if ltree is not None:
ltree.parent = None
if rtree is not None:
rtree.parent = None
# Detach "split_node" from its parent (if any).
parent = split_node.parent parent = split_node.parent
split_node.parent = None split_node.parent = None
# Assign "split_node" to the right tree. # The left tree is initially empty.
rtree = self._join(None, split_node, rtree) # The right tree initially contains only "split_node".
ltree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
rtree = split_node
# Walk up to the root of the tree. # Walk up to the root of the tree.
# On the way up, detach each node from its parent and join it, # On the way up, detach each node from its parent and join its
# and its descendants, to the appropriate tree. # child nodes to the appropriate tree.
node = split_node node = split_node
while parent is not None: while parent is not None:
@ -677,16 +436,53 @@ class UnionFindQueue(Generic[_NameT, _ElemT]):
            # Detach "node" from its parent.
            node.parent = None

            if len(node.childs) == 3:
                if node.childs[0] is child:
                    # "node" has 3 child nodes.
                    # Its left subtree has already been split.
                    # Turn it into a 2-node and join it to the right tree.
                    node.childs.pop(0)
                    self._repair_node(node)
                    rtree = self._join(rtree, node)
                elif node.childs[2] is child:
                    # "node" has 3 child nodes.
                    # Its right subtree has already been split.
                    # Turn it into a 2-node and join it to the left tree.
                    node.childs.pop()
                    self._repair_node(node)
                    if ltree is None:
                        ltree = node
                    else:
                        ltree = self._join(node, ltree)
                else:
                    # "node" has 3 child nodes.
                    # Its middle subtree has already been split.
                    # Join its left child to the left tree, and its right
                    # child to the right tree, then delete "node".
                    node.childs[0].parent = None
                    node.childs[2].parent = None
                    if ltree is None:
                        ltree = node.childs[0]
                    else:
                        ltree = self._join(node.childs[0], ltree)
                    rtree = self._join(rtree, node.childs[2])
            elif node.childs[0] is child:
                # "node" has 2 child nodes.
                # Its left subtree has already been split.
                # Join its right child to the right tree, then delete "node".
                node.childs[1].parent = None
                rtree = self._join(rtree, node.childs[1])
            else:
                # "node" has 2 child nodes.
                # Its right subtree has already been split.
                # Join its left child to the left tree, then delete "node".
                node.childs[0].parent = None
                if ltree is None:
                    ltree = node.childs[0]
                else:
                    ltree = self._join(node.childs[0], ltree)

        assert ltree is not None
        return (ltree, rtree)
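As an aside for readers of this change: the joins above descend the taller tree's spine and resolve 3-child overflows upward with parent pointers. The same leaf-order-preserving 2-3 tree join can be sketched recursively in a few lines. This is an illustrative standalone sketch, not the repository's `ConcatenableQueue` (all names here are invented for the example):

```python
class Node:
    """A 2-3 tree node; leaves have height 0 and hold a value."""
    def __init__(self, childs=None, value=None):
        self.childs = childs if childs is not None else []
        self.value = value
        self.height = 0 if not self.childs else self.childs[0].height + 1

def leaf(value):
    return Node(value=value)

def join(l, r):
    """Join two 2-3 trees; every leaf of "l" precedes every leaf of "r".

    Assumes l.height >= r.height. Returns a root of height l.height,
    or l.height + 1 when an overflow propagates to the top.
    """
    if l.height == r.height:
        return Node([l, r])
    sub = join(l.childs[-1], r)
    if sub.height == l.height:
        # Overflow from below: "sub" is a fresh 2-node, absorb its children.
        childs = l.childs[:-1] + sub.childs
    else:
        childs = l.childs[:-1] + [sub]
    if len(childs) <= 3:
        return Node(childs)
    # 4 children: split into two 2-nodes under a new root.
    return Node([Node(childs[:2]), Node(childs[2:])])

def leaves(node):
    """In-order leaf values, to check the join preserves order."""
    if node.height == 0:
        return [node.value]
    return [v for c in node.childs for v in leaves(c)]

t = leaf("a")
for v in "bcd":
    t = join(t, leaf(v))
u = join(leaf("e"), leaf("f"))
t = join(t, u)
print(leaves(t))  # ['a', 'b', 'c', 'd', 'e', 'f']
```

The recursive form trades the O(1) amortized repair of the pointer-based version for brevity; both keep all leaves in order and all children at equal height.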


@ -574,6 +574,60 @@ class TestVerificationFail(unittest.TestCase):
        verify_optimum(ctx)


class TestValueRange(unittest.TestCase):
    """Test graphs that force big values for dual variables or edge slack."""

    def test_big_blossom_dual(self):
        """Force modified blossom dual close to 2*maxweight."""
        #
        #          [3]
        #         /   \
        #     W-4/     \W
        #    7  /       \
        # [1]-----[2]-----[4]
        #          |  W-4
        #         5|
        #          |    1
        #         [5]-----[6]
        #
        w = 100000
        pairs = mwm([(1,2,7), (2,3,w-4), (2,4,w-4), (2,5,5), (3,4,w), (5,6,1)])
        self.assertEqual(pairs, [(1,2), (3,4), (5,6)])

    def test_negative_blossom_dual(self):
        """Force modified blossom dual close to -maxweight."""
        #
        #          [3]
        #         /   \
        #       5/     \7
        #    1  /       \
        # [1]-----[2]-----[4]
        #          |  5
        #         1|
        #          |    W
        #         [5]-----[6]
        #
        w = 100000
        pairs = mwm([(1,2,1), (2,3,5), (2,4,5), (2,5,1), (3,4,7), (5,6,w)])
        self.assertEqual(pairs, [(1,2), (3,4), (5,6)])

    def test_big_edge_slack(self):
        """Force modified edge slack close to 3*maxweight."""
        #
        #      6     W     W-2    5
        #  [1]-----[2]-----[3]-----[4]-----[5]
        #   |               |
        #   |1              |1
        #   |               |
        #  [6]-----[7]-----[8]-----[9]-----[10]
        #      6     W     W-2    5
        #
        w = 100000
        pairs = mwm([(1,2,6), (1,6,1), (2,3,w), (3,4,w-2), (3,8,1), (4,5,5),
                     (6,7,6), (7,8,w), (8,9,w-2), (9,10,5)])
        self.assertEqual(pairs, [(1,6), (2,3), (4,5), (7,8), (9,10)])


if __name__ == "__main__":
    unittest.main()
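Big-weight cases like these are easy to cross-check against exhaustive search. Below is a minimal brute-force reference that returns only the optimal matching weight; it is a hypothetical helper written for illustration, not part of this test suite, and is exponential in the number of edges, so it only suits tiny graphs such as the ones above:

```python
from itertools import combinations

def brute_force_mwm_weight(edges):
    """Maximum total weight over all matchings, by exhaustive search.

    "edges" is a list of (i, j, weight) tuples. A subset of edges is a
    matching exactly when no vertex appears twice among its endpoints.
    """
    best = 0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            endpoints = [v for (i, j, _) in subset for v in (i, j)]
            if len(endpoints) == len(set(endpoints)):
                best = max(best, sum(w for (_, _, w) in subset))
    return best

w = 100000
edges = [(1,2,1), (2,3,5), (2,4,5), (2,5,1), (3,4,7), (5,6,w)]
print(brute_force_mwm_weight(edges))  # 100008 = 1 + 7 + w
```

Comparing the solver's matched-edge weight sum against this value gives an independent check that does not depend on the expected pairing being unique.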


@ -3,11 +3,11 @@
import random
import unittest

from mwmatching.datastruct import ConcatenableQueue, PriorityQueue


class TestConcatenableQueue(unittest.TestCase):
    """Test ConcatenableQueue."""

    def _check_tree(self, queue):
        """Check tree balancing rules and priority info."""
@ -20,31 +20,24 @@ class TestUnionFindQueue(unittest.TestCase):
            node = nodes.pop()

            if node is not queue.tree:
                self.assertIsNone(node.owner)

            if node.height == 0:
                self.assertEqual(len(node.childs), 0)
                self.assertIs(node.min_node, node)
            else:
                self.assertIn(len(node.childs), (2, 3))
                best_node = set()
                best_prio = None
                for child in node.childs:
                    self.assertIs(child.parent, node)
                    self.assertEqual(child.height, node.height - 1)
                    nodes.append(child)
                    if ((best_prio is None)
                            or (child.min_node.prio < best_prio)):
                        best_node = {child.min_node}
                        best_prio = child.min_node.prio
                    elif child.min_node.prio == best_prio:
                        best_node.add(child.min_node)
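The priority bookkeeping this check verifies is the usual subtree-minimum cache: each internal node stores the minimum-priority leaf among its children's cached minima, so the root answers minimum queries in O(1). A toy illustration with invented names, not the repository's classes:

```python
class Leaf:
    def __init__(self, name, prio):
        self.name, self.prio = name, prio
        self.min_node = self  # a leaf is its own subtree minimum

class Internal:
    def __init__(self, childs):
        assert len(childs) in (2, 3)
        self.childs = childs
        # Cache the minimum-priority leaf of the whole subtree.
        self.min_node = min((c.min_node for c in childs),
                            key=lambda n: n.prio)

a, b, c = Leaf("a", 5), Leaf("b", 2), Leaf("c", 9)
root = Internal([Internal([a, b]), c])
print(root.min_node.name)  # "b" wins with priority 2
```

Because the cache only depends on a node's direct children, a structural change needs repairs along a single root-ward path, which is what `_repair_node` does in the main code.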
@ -53,7 +46,7 @@ class TestUnionFindQueue(unittest.TestCase):
    def test_single(self):
        """Single element."""
        q = ConcatenableQueue("Q")

        with self.assertRaises(Exception):
            q.min_prio()
@ -62,7 +55,7 @@ class TestUnionFindQueue(unittest.TestCase):
            q.min_elem()

        n = q.insert("a", 4)
        self.assertIsInstance(n, ConcatenableQueue.Node)

        self._check_tree(q)
@ -84,22 +77,22 @@ class TestUnionFindQueue(unittest.TestCase):
    def test_simple(self):
        """Simple test, 5 elements."""
        q1 = ConcatenableQueue("A")
        n1 = q1.insert("a", 5)

        q2 = ConcatenableQueue("B")
        n2 = q2.insert("b", 6)

        q3 = ConcatenableQueue("C")
        n3 = q3.insert("c", 7)

        q4 = ConcatenableQueue("D")
        n4 = q4.insert("d", 4)

        q5 = ConcatenableQueue("E")
        n5 = q5.insert("e", 3)

        q345 = ConcatenableQueue("P")
        q345.merge([q3, q4, q5])
        self._check_tree(q345)
@ -120,7 +113,7 @@ class TestUnionFindQueue(unittest.TestCase):
        self.assertEqual(q345.min_prio(), 4)
        self.assertEqual(q345.min_elem(), "d")

        q12 = ConcatenableQueue("Q")
        q12.merge([q1, q2])
        self._check_tree(q12)
@ -129,7 +122,7 @@ class TestUnionFindQueue(unittest.TestCase):
        self.assertEqual(q12.min_prio(), 5)
        self.assertEqual(q12.min_elem(), "a")

        q12345 = ConcatenableQueue("R")
        q12345.merge([q12, q345])
        self._check_tree(q12345)
@ -201,30 +194,30 @@ class TestUnionFindQueue(unittest.TestCase):
        queues = []
        nodes = []
        for i in range(14):
            q = ConcatenableQueue(chr(ord("A") + i))
            n = q.insert(chr(ord("a") + i), prios[i])
            queues.append(q)
            nodes.append(n)

        q = ConcatenableQueue("AB")
        q.merge(queues[0:2])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[0:2]))

        q = ConcatenableQueue("CDE")
        q.merge(queues[2:5])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[2:5]))

        q = ConcatenableQueue("FGHI")
        q.merge(queues[5:9])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[5:9]))

        q = ConcatenableQueue("JKLMN")
        q.merge(queues[9:14])
        queues.append(q)
        self._check_tree(q)
@ -239,7 +232,7 @@ class TestUnionFindQueue(unittest.TestCase):
        for i in range(9, 14):
            self.assertEqual(nodes[i].find(), "JKLMN")

        q = ConcatenableQueue("ALL")
        q.merge(queues[14:18])
        queues.append(q)
        self._check_tree(q)
@ -298,7 +291,7 @@ class TestUnionFindQueue(unittest.TestCase):
        for i in range(4000):
            name = f"q{i}"
            q = ConcatenableQueue(name)
            p = rng.random()
            n = q.insert(f"n{i}", p)
            nodes.append(n)
@ -326,7 +319,7 @@ class TestUnionFindQueue(unittest.TestCase):
                subs = rng.sample(sorted(live_queues), k)
                name = f"Q{i}"
                q = ConcatenableQueue(name)
                q.merge([queues[nn] for nn in subs])
                self._check_tree(q)
                queues[name] = q