
Compare commits


No commits in common. "e103a493fc6b30b3a17f4daabd78df1637a43ec4" and "e2f5b63a01ef656a41db35e709a4efdf9c986632" have entirely different histories.

6 changed files with 791 additions and 882 deletions


For an edge-weighted graph, a _maximum weight matching_ is a matching that achieves
the largest possible sum of weights of matched edges.
The code in this repository is based on a variant of the blossom algorithm that runs in
_O(n m log n)_ steps.
See the file [Algorithm.md](doc/Algorithm.md) for a detailed description.
The folder [cpp/](cpp/) contains a header-only C++ implementation of maximum weight matching.
**NOTE:**
The C++ code currently implements a slower algorithm that runs in _O(n<sup>3</sup>)_ steps.
I plan to eventually update the C++ code to implement the faster _O(n m log n)_ algorithm.
The C++ code is self-contained and can easily be linked into an application.
It is also reasonably efficient.


## Introduction
This document describes the implementation of an algorithm that computes a maximum weight matching
in a general graph in time _O(n (n + m) log n)_, where _n_ is the number of vertices in
the graph and _m_ is the number of edges.
In graph theory, a _matching_ is a subset of edges such that none of them share a common vertex.
A _maximum cardinality matching_ is a matching that contains the largest possible number of edges
(or equivalently, the largest possible number of vertices).
If a graph has weights assigned to its edges, a _maximum weight matching_
is a matching that achieves the largest possible sum of weights of matched edges.
An algorithm for maximum weight matching can obviously also be used to calculate a maximum
cardinality matching by simply assigning weight 1 to all edges.
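To make the reduction concrete, here is a tiny brute-force sketch (exponential time, purely illustrative, and unrelated to the blossom algorithm described in this document): with every weight set to 1, the maximum weight matching it finds is a maximum cardinality matching. The function name and edge format are made up for this example.

```python
from itertools import combinations

def max_weight_matching_bruteforce(edges):
    """Exhaustively try every subset of edges and keep the best matching.
    Only usable on tiny graphs; edges are (x, y, w) tuples."""
    best, best_weight = [], 0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            vertices = [v for (x, y, w) in subset for v in (x, y)]
            if len(vertices) == len(set(vertices)):   # no shared vertex
                weight = sum(w for (_, _, w) in subset)
                if weight > best_weight:
                    best, best_weight = list(subset), weight
    return best, best_weight

# Maximum cardinality matching = maximum weight matching with unit weights.
edges = [(1, 2, 1), (2, 3, 1), (3, 4, 1)]
matching, weight = max_weight_matching_bruteforce(edges)
```

On this path graph the best matching uses the two outer edges, so the unit-weight "weight" equals the cardinality.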
Certain related problems can be understood as _restrictions_ of maximum weighted matching
in general graphs.
Examples are: maximum matching in bipartite graphs, maximum cardinality matching in general graphs,
and maximum weighted matching in general graphs with edge weights limited to integers
It is based on the ideas of Edmonds, but uses different data structures to reduce the amount
of work.
In 1983, Galil, Micali and Gabow published a maximum weighted matching algorithm that runs in
time _O(n m log n)_ [[4]](#galil_micali_gabow1986).
It is an implementation of Edmonds' blossom algorithm that uses advanced data structures
to speed up critical parts of the algorithm.
This algorithm is asymptotically faster than _O(n<sup>3</sup>)_ for sparse graphs,
of the literature.
The paper describes a maximum weighted matching algorithm that is similar to Edmonds'
blossom algorithm, but carefully implemented to run in time _O(n<sup>3</sup>)_.
It then sketches how advanced data structures can be added to arrive at the Galil-Micali-Gabow
algorithm that runs in time _O(n m log n)_.
In 1990, Gabow published a maximum weighted matching algorithm that runs in time
_O(n m + n<sup>2</sup> log n)_ [[6]](#gabow1990).
It uses several advanced data structures, including Fibonacci heaps.
Unfortunately I don't understand this algorithm at all.
## Choosing an algorithm
I selected the _O(n m log n)_ algorithm by Galil, Micali and Gabow
[[4]](#galil_micali_gabow1986).
This algorithm is asymptotically optimal for sparse graphs.
It has also been shown to be quite fast in practice on several types of graphs
including random graphs [[7]](#mehlhorn_schafer2002).
This algorithm is more difficult to implement than the older _O(n<sup>3</sup>)_ algorithm.
In particular, it requires a specialized data structure to implement concatenable priority queues.
This increases the size and complexity of the code quite a bit.
However, in my opinion the performance improvement is worth the extra effort.
## Description of the algorithm
My implementation roughly follows the description by Zvi Galil in [[5]](#galil1986).
I recommend reading that paper before diving into my description below.
The paper explains the algorithm in depth and shows how it relates to matching
in bipartite graphs and non-weighted graphs.
have _slack_.
An augmenting path that consists only of tight edges is _guaranteed_ to increase the weight
of the matching as much as possible.
While searching for an augmenting path, we restrict the search to tight edges,
ignoring all edges that have slack.
Certain explicit actions of the algorithm cause edges to become tight or slack.
How this works will be explained later.
To find an augmenting path, the algorithm searches for alternating paths that start
in an unmatched vertex.
The collection of such alternating paths forms a forest of trees.
Each tree is rooted in an unmatched vertex, and all paths from the root to the leaves of a tree
are alternating paths.
The nodes in these trees are top-level blossoms.
an odd-length alternating cycle.
The lowest common ancestor node in the alternating tree forms the beginning and end
of the alternating cycle.
In this case a new blossom must be created by shrinking the cycle.
On the other hand, if the two S-blossoms are in different alternating trees,
the edge that links the blossoms is part of an augmenting path between the roots of the two trees.
![Figure 3](figures/graph3.png) <br>
*Figure 3: Growing alternating trees*
The search procedure considers these vertices one-by-one and tries to use them to
either grow the alternating tree (thus adding new vertices to the queue),
or discover an augmenting path or a new blossom.
The search for an augmenting path proceeds as follows:
- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
$$ \pi_{x,y} = u_x + u_y + \sum_{(x,y) \in B} z_B - w_{x,y} $$
An edge is _tight_ if and only if its slack is zero.
Given the values of the dual variables, it is very easy to calculate the slack of an edge
which is not contained in any blossom: add the duals of its incident vertices and
subtract the weight.
To check whether an edge is tight, simply compute its slack and compare it to zero.
Calculating the slack of an edge that is contained in one or more blossoms is a little tricky,
but fortunately we don't need such calculations.
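For an edge outside all blossoms, the slack computation is a one-liner. A minimal sketch with made-up dual values (`edge_slack` is a hypothetical helper, not part of the implementation):

```python
def edge_slack(u, edge):
    """Slack of an edge (x, y, w) not contained in any blossom:
    pi = u[x] + u[y] - w.  Blossom duals z_B play no role here."""
    x, y, w = edge
    return u[x] + u[y] - w

# Hypothetical vertex duals for a tiny example.
u = {1: 4, 2: 3, 3: 5}
is_tight = edge_slack(u, (1, 2, 7)) == 0   # 4 + 3 - 7 = 0: tight
slack = edge_slack(u, (2, 3, 6))           # 3 + 5 - 6 = 2: has slack
```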
At that point the maximum weight matching has been found.
When the matching algorithm is finished, the constraints can be checked to verify
that the matching is optimal.
This check is simpler and faster than the matching algorithm itself.
It can therefore be a useful way to guard against bugs in the algorithm.
### Rules for updating dual variables
It then changes dual variables as follows:
- _z<sub>B</sub> &larr; z<sub>B</sub> &#x2212; 2 * &delta;_ for every non-trivial T-blossom _B_
Dual variables of unlabeled blossoms and their vertices remain unchanged.
Dual variables _z<sub>B</sub>_ of non-trivial sub-blossoms also remain unchanged;
only top-level blossoms have their _z<sub>B</sub>_ updated.
Note that these rules ensure that no change occurs to the slack of any edge which is matched,
or part of an alternating tree, or contained in a blossom.
Such edges are tight and remain tight through the update.
However, the update reduces the slack of edges between S-blossoms, and of edges between S-blossoms
and unlabeled blossoms.
It may cause some of these edges to become tight, allowing them to be used to grow the alternating trees.
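The update rules can be sketched as follows, assuming the standard _&plusmn;&delta;_ form for vertex duals and _&plusmn;2&delta;_ form for blossom duals; the dictionaries and label map here are hypothetical stand-ins for the implementation's data structures:

```python
def update_duals(u, z, label, vertex_blossom, delta):
    """Apply one dual update of size delta (sketch).
    u: vertex duals; z: duals of non-trivial top-level blossoms;
    label: top-level blossom -> 'S', 'T' or None;
    vertex_blossom: vertex -> its top-level blossom."""
    for x in u:
        lbl = label.get(vertex_blossom[x])
        if lbl == 'S':
            u[x] -= delta              # u_x <- u_x - delta
        elif lbl == 'T':
            u[x] += delta              # u_x <- u_x + delta
    for B in z:
        if label.get(B) == 'S':
            z[B] += 2 * delta          # z_B <- z_B + 2*delta
        elif label.get(B) == 'T':
            z[B] -= 2 * delta          # z_B <- z_B - 2*delta

# A tight edge between an S-vertex and a T-vertex stays tight:
u = {1: 4, 2: 3}
z = {}
label = {'B1': 'S', 'B2': 'T'}
vertex_blossom = {1: 'B1', 2: 'B2'}
update_duals(u, z, label, vertex_blossom, 1)
slack_after = u[1] + u[2] - 7   # was 4 + 3 - 7 = 0
```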
If the dual update is limited by _&delta;<sub>4</sub>_, it causes the dual variable of
a T-blossom to become zero.
The next step is to expand that blossom.
### Discovering tight edges through delta steps
A delta step may find that _&delta; = 0_, implying that the dual variables don't change.
This can still be useful since all types of updates have side effects (adding an edge
to an alternating tree, or expanding a blossom) that allow the algorithm to make progress.
In fact, it is convenient to let the dual update mechanism drive the entire process of discovering
tight edges and growing alternating trees.
In my description of the search algorithm above, I stated that upon discovery of a tight edge
between a newly labeled S-vertex and an unlabeled vertex or a different S-blossom, the edge should
be used to grow the alternating tree or to create a new blossom or to form an augmenting path.
However, it turns out to be easier to postpone the use of such edges until the next delta step.
While scanning newly labeled S-vertices, edges to unlabeled vertices or different S-blossoms
are discovered but not yet used.
Such edges are merely registered in a suitable data structure.
Even if the edge is tight, it is registered rather than used right away.
Once the scan completes, a delta step will be done.
If any tight edges were discovered during the scan, the delta step will find that either
_&delta;<sub>2</sub> = 0_ or _&delta;<sub>3</sub> = 0_.
The corresponding step (growing the alternating tree, creating a blossom or augmenting
the matching) will occur at that point.
If no suitable tight edges exist, a real (non-zero) change of dual variables will occur.
The search for an augmenting path becomes as follows:
- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
- Assign label S to all top-level blossoms that contain an unmatched vertex.
Add all vertices inside such blossoms to _Q_.
- Repeat until either an augmenting path is found or _&delta;<sub>1</sub> = 0_:
- Scan all vertices in Q as described earlier.
Register edges to unlabeled vertices or other S-blossoms.
Do not yet use such edges to change the alternating tree, even if the edge is tight.
- Calculate _&delta;_ and update dual variables as described above.
- If _&delta; = &delta;<sub>1</sub>_, end the search.
The maximum weight matching has been found.
- If _&delta; = &delta;<sub>2</sub>_, use the corresponding edge to grow the alternating tree.
Assign label T to the unlabeled blossom.
Then assign label S to its mate and add the new S-vertices to _Q_.
- If _&delta; = &delta;<sub>3</sub>_ and the corresponding edge connects two S-blossoms
in the same alternating tree, use the edge to create a new blossom.
Add the new S-vertices to _Q_.
- If _&delta; = &delta;<sub>3</sub>_ and the corresponding edge connects two S-blossoms
in different alternating trees, use the edge to construct an augmenting path.
End the search and return the augmenting path.
- If _&delta; = &delta;<sub>4</sub>_, expand the corresponding T-blossom.
It may seem complicated, but this is actually easier.
The code that scans newly labeled S-vertices no longer needs special treatment of tight edges.
In general, multiple updates of the dual variables are necessary during a single _stage_ of
the algorithm.
Remember that a stage is the process of growing alternating trees until an augmenting path is found.
A stage ends either by augmenting the matching, or by concluding that no further improvement
is possible.
Each stage consists of one or more substages.
A substage scans tight edges to grow the alternating trees.
When a substage runs out of tight edges, it ends by performing a dual variable update.
A substage also ends immediately when it finds an augmenting path.
At the end of each stage, labels and alternating trees are removed.
The matching algorithm ends when a substage ends in a dual variable update limited
by _&delta;<sub>1</sub>_.
At that point the matching has maximum weight.
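The stage/substage structure can be sketched as a small driver loop. Here `compute_deltas` and `apply_step` are hypothetical callbacks standing in for the real scanning and update machinery:

```python
import math

def stage_loop(compute_deltas, apply_step):
    """Drive one stage through substages (sketch).  compute_deltas()
    returns {1: d1, 2: d2, 3: d3, 4: d4} with math.inf for options
    that are unavailable; apply_step(kind, delta) performs the
    corresponding action and returns True when the stage ends
    (augmenting path found, or kind == 1)."""
    while True:
        deltas = compute_deltas()
        kind = min(deltas, key=deltas.get)   # smallest delta wins
        if apply_step(kind, deltas[kind]):
            return kind

# Scripted toy run: one delta_2 grow step, then delta_1 ends the search.
script = iter([{1: 5, 2: 1, 3: 3, 4: math.inf},
               {1: 2, 2: math.inf, 3: math.inf, 4: math.inf}])
steps = []
result = stage_loop(lambda: next(script),
                    lambda kind, d: steps.append(kind) or kind == 1)
```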
### Expanding a blossom
All vertices of sub-blossoms that got label S are inserted into _Q_.
![Figure 5](figures/graph5.png) <br>
*Figure 5: Expanding a T-blossom*
### Keeping track of the top-level blossom of each vertex
The algorithm often needs to find the top-level blossom _B(x)_ that contains a given vertex _x_.
A naive implementation may keep this information in an array where the element with
index _x_ holds a pointer to blossom _B(x)_.
Lookup in this array would be fast, but keeping the array up-to-date takes too much time.
There can be _O(n)_ stages, and _O(n)_ blossoms can be created or expanded during a stage,
and a blossom can contain _O(n)_ vertices,
therefore the total number of updates to the array could add up to _O(n<sup>3</sup>)_.
To solve this, we use a special data structure: a _concatenable priority queue_.
Each top-level blossom maintains an instance of this type of queue, containing its vertices.
Each vertex is a member in precisely one such queue.
To find the top-level blossom _B(x)_ that contains a given vertex _x_, we determine
the queue instance in which the vertex is a member.
This takes time _O(log n)_.
The queue instance corresponds directly to a specific blossom, which we can find
for example by storing a pointer to the blossom inside the queue instance.
When a new blossom is created, the concatenable queues of its sub-blossoms are merged
to form one concatenable queue for the new blossom.
The merged queue contains all vertices of the original queues.
Merging a pair of queues takes time _O(log n)_.
To merge the queues of _k_ sub-blossoms, the concatenation step is repeated _k-1_ times,
taking total time _O(k log n)_.
When a blossom is expanded, its concatenable queue is un-concatenated to recover separate queues
for the sub-blossoms.
This also takes time _O(log n)_ for each sub-blossom.
Implementation details of concatenable queues are discussed later in this document.
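As a simplified illustration of the find-owner idea only (not the real concatenable queue: union-find supports merging but cannot be split again when a blossom is expanded, and it carries no priorities), the class below is a hypothetical sketch:

```python
class BlossomOwnership:
    """Answers 'which top-level blossom contains vertex x' and supports
    merging on blossom creation, via union-find with path compression."""
    def __init__(self, vertices):
        self.parent = {x: x for x in vertices}
        self.blossom = {x: ('trivial', x) for x in vertices}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # compress
            x = self.parent[x]
        return x
    def top_blossom(self, x):
        return self.blossom[self.find(x)]
    def merge(self, xs, new_blossom):
        """Record that the blossoms containing xs were merged."""
        root = self.find(xs[0])
        for y in xs[1:]:
            self.parent[self.find(y)] = root
        self.blossom[root] = new_blossom

bo = BlossomOwnership([1, 2, 3, 4])
bo.merge([1, 2, 3], 'B1')
```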
### Lazy updating of dual variables
During a delta step, the dual variables of labeled vertices and blossoms change as described above.
Updating these variables directly would take time _O(n)_ per delta step.
The total number of delta steps during a matching may be _&Theta;(n<sup>2</sup>)_,
pushing the total time to update dual variables to _O(n<sup>3</sup>)_ which is too slow.
To solve this, [[4]](#galil_micali_gabow1986) describes a technique that stores dual values
in a _modified_ form which is invariant under delta steps.
The modified values can be converted back to the true dual values when necessary.
[[7]](#mehlhorn_schafer2002) describes a slightly different technique which I find easier
to understand.
My implementation is very similar to theirs.
The first trick is to keep track of the running sum of _&delta;_ values since the beginning of the algorithm.
Let's call that number _&Delta;_.
At the start of the algorithm _&Delta; = 0_, but the value increases as the algorithm goes through delta steps.
For each non-trivial blossom, rather than storing its true dual value, we store a _modified_ dual value:
- For an S-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> - 2 &Delta;_
- For a T-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> + 2 &Delta;_
- For an unlabeled blossom or non-top-level blossom, the modified dual value is equal
to the true dual value.
These modified values are invariant under delta steps.
Thus, there is no need to update the stored values during a delta step.
Since the modified blossom dual value depends on the label (S or T) of the blossom,
the modified value must be updated whenever the label of the blossom changes.
This update can be done in constant time, and changing the label of a blossom is
in any case an explicit step, so this won't increase the asymptotic run time.
For each vertex, rather than storing its true dual value, we store a _modified_ dual value:
- For an S-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> + &Delta;_
- For a T-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub> - &Delta;_
- For an unlabeled vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub>_
where _offset<sub>B</sub>_ is an extra variable which is maintained for each top-level blossom.
Again, the modified values are invariant under delta steps, which implies that no update
to the stored values is necessary during a delta step.
Since the modified vertex dual value depends on the label (S or T) of its top-level blossom,
an update is necessary when that label changes.
For S-vertices, we can afford to apply that update directly to the vertices involved.
This is possible since a vertex becomes an S-vertex at most once per stage.
The situation is more complicated for T-vertices.
During a stage, a T-vertex can become unlabeled if it is part of a T-blossom that gets expanded.
The same vertex can again become a T-vertex, then again become unlabeled during a subsequent
blossom expansion.
In this way, a vertex can transition between T-vertex and unlabeled vertex up to _O(n)_ times
within a stage.
We cannot afford to update the stored modified vertex dual so many times.
This is where the _offset_ variables come in.
If a blossom becomes a T-blossom, rather than updating the modified duals of all vertices,
we update only the _offset_ variable of the blossom such that the modified vertex duals
remain unchanged.
If a blossom is expanded, we push the _offset_ values down to its sub-blossoms.
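Recovering a true vertex dual from its stored modified value is then a direct inversion of the three rules above (sketch; the arguments are hypothetical names, not the implementation's API):

```python
def true_vertex_dual(u_mod, label, offset_B, Delta):
    """Invert the modified-dual rules:
    S-vertex:      u' = u + Delta
    T-vertex:      u' = u - offset_B - Delta
    unlabeled:     u' = u - offset_B"""
    if label == 'S':
        return u_mod - Delta
    if label == 'T':
        return u_mod + offset_B + Delta
    return u_mod + offset_B

# A vertex with true dual 10, at Delta = 3 and blossom offset 2:
s_dual = true_vertex_dual(13, 'S', 0, 3)
t_dual = true_vertex_dual(5, 'T', 2, 3)
free_dual = true_vertex_dual(8, None, 2, 3)
```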
### Efficiently computing _&delta;_
To perform a delta step, the algorithm computes the values
of _&delta;<sub>1</sub>_, _&delta;<sub>2</sub>_, _&delta;<sub>3</sub>_ and _&delta;<sub>4</sub>_
and determines which edge (_&delta;<sub>2</sub>_, _&delta;<sub>3</sub>_) or
blossom (_&delta;<sub>4</sub>_) achieves the minimum value.
A naive implementation might compute _&delta;_ by looping over the vertices, blossoms and edges
in the graph.
The total number of delta steps during a matching may be _&Theta;(n<sup>2</sup>)_,
pushing the total time for _&delta;_ calculations to _O(n<sup>2</sup> m)_ which is much too slow.
[[4]](#galil_micali_gabow1986) introduces a combination of data structures from which
the value of _&delta;_ can be computed efficiently.
_&delta;<sub>1</sub>_ is the minimum dual value of any S-vertex.
This value can be computed in constant time.
The dual value of an unmatched vertex is reduced by _&delta;_ during every delta step.
Since all vertex duals start with the same dual value _u<sub>start</sub>_,
all unmatched vertices have dual value _&delta;<sub>1</sub> = u<sub>start</sub> - &Delta;_.
_&delta;<sub>3</sub>_ is half of the minimum slack of any edge between two different S-blossoms.
To compute this efficiently, we keep edges between S-blossoms in a priority queue.
The edges are inserted into the queue during scanning of newly labeled S-vertices.
To compute _&delta;<sub>3</sub>_, we simply find the minimum-priority element of the queue.
A complication may occur when a new blossom is created.
Edges that connect different top-level S-blossoms before creation of the new blossom,
may end up as internal edges inside the newly created blossom.
This implies that such edges would have to be removed from the _&delta;<sub>3</sub>_ priority queue,
but that would be quite difficult.
Instead, we just let those edges stay in the queue.
When computing the value of _&delta;<sub>3</sub>_, we thus have to check whether the minimum
element represents an edge between different top-level blossoms.
If not, we discard such stale elements until we find an edge that does.
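A sketch of this stale-entry handling, with a plain binary heap standing in for the real priority queue:

```python
import heapq

def delta3_top(queue, top_blossom):
    """Return the queue's minimum entry that still connects two
    different top-level blossoms, discarding stale entries (sketch).
    Entries are (priority, (x, y, w)); top_blossom maps a vertex to
    its current top-level blossom."""
    while queue:
        prio, (x, y, w) = queue[0]
        if top_blossom(x) == top_blossom(y):
            heapq.heappop(queue)   # stale: became internal to a blossom
        else:
            return prio, (x, y, w)
    return None

blossom_of = {1: 'A', 2: 'A', 3: 'B'}      # 1 and 2 were merged into 'A'
queue = [(1, (1, 2, 5)), (2, (1, 3, 4))]
heapq.heapify(queue)
best = delta3_top(queue, blossom_of.get)   # skips the internal edge
```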
A complication occurs when dual variables are updated.
At that point, the slack of any edge between different S-blossoms decreases by _2\*&delta;_.
But we cannot afford to update the priorities of all elements in the queue.
To solve this, we set the priority of each edge to its _modified slack_.
The _modified slack_ of an edge is defined as follows:
$$ \pi'_{x,y} = u'_x + u'_y - w_{x,y} $$
The modified slack is computed in the same way as true slack, except it uses
the modified vertex duals instead of true vertex duals.
Blossom duals are ignored since we will never compute the modified slack of an edge that
is contained inside a blossom.
Because modified vertex duals are invariant under delta steps, so is the modified edge slack.
As a result, the priorities of edges in the priority queue remain unchanged during a delta step.
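A small numeric sketch of this invariance, with made-up duals for an edge between two S-vertices (so _u' = u + &Delta;_):

```python
def modified_slack(u_mod, edge):
    """pi' = u'_x + u'_y - w, using modified vertex duals."""
    x, y, w = edge
    return u_mod[x] + u_mod[y] - w

# Before any delta step: true duals 5 and 6, Delta = 0.
Delta = 0
u_mod = {1: 5 + Delta, 2: 6 + Delta}
before = modified_slack(u_mod, (1, 2, 7))
# After a delta step of 1: true duals drop to 4 and 5, Delta becomes 1,
# so the stored modified duals -- and hence the priority -- are unchanged.
Delta = 1
u_mod = {1: 4 + Delta, 2: 5 + Delta}
after = modified_slack(u_mod, (1, 2, 7))
true_slack = after - 2 * Delta   # 4 + 5 - 7 = 2
```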
_&delta;<sub>4</sub>_ is half of the minimum dual variable of any T-blossom.
To compute this efficiently, we keep non-trivial T-blossoms in a priority queue.
The blossoms are inserted into the queue when they become a T-blossom and removed from
the queue when they stop being a T-blossom.
A complication occurs when dual variables are updated.
At that point, the dual variable of any T-blossom decreases by _2\*&delta;_.
But we cannot afford to update the priorities of all elements in the queue.
To solve this, we set the priority of each blossom to its _modified_ dual value
_z'<sub>B</sub> = z<sub>B</sub> + 2\*&Delta;_.
These values are invariant under delta steps.
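A sketch of the resulting _&delta;<sub>4</sub>_ computation from the modified priorities (the list argument is a hypothetical stand-in for the queue):

```python
def delta4(t_blossom_priorities, Delta):
    """delta_4 = half the minimum true dual of any T-blossom.
    Stored priorities are the modified duals z' = z + 2*Delta (sketch)."""
    z_mod = min(t_blossom_priorities)
    return (z_mod - 2 * Delta) / 2

# T-blossoms with true duals 4 and 8, at Delta = 3, are stored as 10 and 14.
d4 = delta4([10, 14], 3)   # (10 - 6) / 2
```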
_&delta;<sub>2</sub>_ is the minimum slack of any edge between an S-vertex and unlabeled vertex.
To compute this efficiently, we use a fairly complicated strategy that involves
three levels of priority queues.
At the lowest level, every T-vertex or unlabeled vertex maintains a separate priority queue
of edges between itself and any S-vertex.
Edges are inserted into this queue during scanning of newly labeled S-vertices.
Note that S-vertices do not maintain this type of queue.
The priorities of edges in these queues are set to their _modified slack_.
This ensures that the priorities remain unchanged during delta steps.
The priorities also remain unchanged when the T-vertex becomes unlabeled or the unlabeled
vertex becomes a T-vertex.
At the middle level, every T-blossom or unlabeled top-level blossom maintains a priority queue
containing its vertices.
This is in fact the _concatenable priority queue_ that is maintained by every top-level blossom
as was described earlier in this document.
The priority of each vertex in the mid-level queue is set to the minimum priority of any edge
in the low-level queue of that vertex.
If edges are added to (or removed from) the low-level queue, the priority of the corresponding
vertex in the mid-level queue may change.
If the low-level queue of a vertex is empty, that vertex has priority _Infinity_
in the mid-level queue.
At the highest level, unlabeled top-level blossoms are tracked in one global priority queue.
The priority of each blossom in this queue is set to the minimum slack of any edge
from that blossom to an S-vertex plus _&Delta;_.
These priorities are invariant under delta steps.
To compute _&delta;<sub>2</sub>_, we find the minimum priority in the high-level queue
and adjust it by _&Delta;_.
To find the edge associated with _&delta;<sub>2</sub>_,
we use the high-level queue to find the unlabeled blossom with minimum priority,
then use that blossom's mid-level queue to find the vertex with minimum priority,
then use that vertex's low-level queue to find the edge with minimum priority.
The whole thing is a bit tricky, but it works.
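The three-level descent can be sketched with plain lists standing in for the queues (the real mid level is a concatenable queue; all names and numbers here are made up):

```python
def find_delta2_edge(high, mid, low, Delta):
    """Descend the three levels to the least-slack edge from an
    unlabeled blossom to an S-vertex (sketch).
    high: (priority, blossom) pairs for unlabeled top-level blossoms;
    mid[B]: (priority, vertex) pairs for vertices of blossom B;
    low[x]: (priority, edge) pairs for edges of vertex x."""
    prio_B, B = min(high)         # best unlabeled top-level blossom
    prio_x, x = min(mid[B])       # best vertex inside that blossom
    prio_e, edge = min(low[x])    # best edge of that vertex
    return prio_B - Delta, edge   # high-level priority minus Delta = delta_2

# Hypothetical state: unlabeled blossom 'U' holds vertices 5 and 6.
high = [(7, 'U')]
mid = {'U': [(9, 5), (7, 6)]}
low = {5: [(9, (5, 1, 3))], 6: [(7, (6, 2, 8))]}
d2, edge = find_delta2_edge(high, mid, low, 1)
```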
### Re-using alternating trees
According to [[5]](#galil1986), labels and alternating trees should be erased at the end of each stage.
However, the algorithm can be optimized by keeping some of the labels and re-using them
in the next stage.
The optimized algorithm erases _only_ the two alternating trees that are part of
the augmenting path.
All blossoms in those two trees lose their labels and become free blossoms again.
Other alternating trees, which are not involved in the augmenting path, are preserved
into the next stage, and so are the labels on the blossoms in those trees.
This optimization is well known and is described for example in [[7]](#mehlhorn_schafer2002).
It does not affect the worst-case asymptotic run time of the algorithm,
but it provides a significant practical speed up for many types of graphs.
Erasing alternating trees is easy enough, but selectively stripping labels off blossoms
has a few implications.
The blossoms that lose their labels need to have their modified dual values updated.
The T-blossoms additionally need to have their _offset<sub>B</sub>_ variables updated
to keep the vertex dual values consistent.
For S-blossoms that lose their labels, the modified vertex dual variables are updated directly.
The various priority queues also need updating.
Former T-blossoms must be removed from the priority queue for _&delta;<sub>4</sub>_.
Edges incident on former S-vertices must be removed from the priority queues for _&delta;<sub>3</sub>_ and _&delta;<sub>2</sub>_.
Finally, S-vertices that become unlabeled need to construct a proper priority queue
of incident edges to other S-vertices for _&delta;<sub>2</sub>_ tracking.
This involves visiting every incident edge of every vertex in each S-blossom that loses its label.
## Run time of the algorithm
Every stage of the algorithm either increases the number of matched vertices by 2 or
ends the matching.
Therefore the number of stages is at most _n/2_.
Every stage runs in time _O((n + m) log n)_, therefore the complete algorithm runs in
time _O(n (n + m) log n)_.
Every stage runs in _O(n<sup>2</sup>)_ steps, therefore the complete algorithm runs in
_O(n<sup>3</sup>)_ steps.
During each stage, edge scanning is driven by the queue _Q_.
Every vertex enters _Q_ at most once.
Each vertex that enters _Q_ has its incident edges scanned, causing every edge in the graph
to be scanned at most twice per stage.
Scanning an edge is done in constant time, unless it leads to the discovery of a blossom
or an augmenting path, which will be separately accounted for.
Therefore edge scanning needs _O(m)_ steps per stage.
Creating a blossom reduces the number of top-level blossoms by at least 2,
thus limiting the number of simultaneously existing blossoms to _O(n)_.
Blossoms that are created during a stage become S-blossoms and survive until the end of that stage,
therefore _O(n)_ blossoms are created during a stage.
Creating a blossom with _k_ sub-blossoms reduces the number of top-level blossoms by _k-1_,
thus limiting the total number of sub-blossoms that can be involved in blossom creation
during a stage to _O(n)_.
Creating a blossom involves tracing the alternating path to the closest common ancestor,
which takes time _O(k log n)_ for a blossom with _k_ sub-blossoms.
It also involves bookkeeping per sub-blossom, which takes time _O(log n)_ per sub-blossom.
It also involves relabeling former T-vertices as S-vertices, but I account for that
time separately below so I can ignore it here.
It also involves merging the concatenable queues which track the vertices in top-level blossoms.
Merging two queues takes time _O(log n)_, therefore merging the queues of all sub-blossoms
takes time _O(k log n)_.
Creating a blossom thus takes time _O(k log n)_.
Blossom creation thus takes total time _O(n log n)_ per stage.
and some bookkeeping per sub-blossom,
and inserting new S-vertices into _Q_,
all of which can be done in _O(n)_ steps per blossom creation.
The cost of managing least-slack edges between S-blossoms will be separately accounted for.
Therefore blossom creation needs _O(n<sup>2</sup>)_ steps per stage
(excluding least-slack edge management).
During each stage, a blossom becomes an S-blossom or T-blossom at most once.
A blossom also becomes unlabeled at most once, at the end of the stage.
Changing the label of a blossom takes some simple bookkeeping, as well as operations
on priority queues (_&delta;<sub>4</sub>_ for T-blossoms, _&delta;<sub>2</sub>_ for unlabeled
blossoms) which take time _O(log n)_ per blossom.
Assigning label S or removing label S also involves work for the vertices in the blossom
and their edges, but I account for that time separately below so I can ignore it here.
Blossom labeling thus takes total time _O(n log n)_ per stage.
As part of creating a blossom, a list _L<sub>B</sub>_ of least-slack edges must be formed.
This involves processing every element of all least-slack edge lists of its sub-blossoms,
and removing redundant edges from the merged list.
Merging and removing redundant edges can be done in one sweep via a temporary array indexed
by target blossom.
Collect the least-slack edges of the sub-blossoms into this array,
indexed by the target blossom of the edge,
keeping only the edge with lowest slack per target blossom.
Then convert the array back into a list by removing unused indices.
This takes _O(1)_ steps per candidate edge, plus _O(n)_ steps to manage the temporary array.
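A minimal sketch of that sweep; the `slack` and `target_blossom` callbacks are hypothetical stand-ins for the algorithm's real edge-slack and blossom lookups:

```python
def merge_least_slack_edges(sub_lists, slack, target_blossom, num_blossoms):
    """Merge the least-slack edge lists of all sub-blossoms, keeping
    only the lowest-slack edge per target blossom."""
    # Temporary array indexed by target blossom.
    best = [None] * num_blossoms
    for edges in sub_lists:
        for e in edges:
            b = target_blossom(e)
            # Keep only the edge with lowest slack per target blossom.
            if best[b] is None or slack(e) < slack(best[b]):
                best[b] = e
    # Convert the array back into a list by removing unused indices.
    return [e for e in best if e is not None]
```

Each candidate edge is handled in _O(1)_ steps, plus _O(n)_ steps to allocate and sweep the temporary array.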
I choose to shift the cost of collecting the candidate edges from the sub-blossoms to
the actions that inserted those edges into the sub-blossom lists.
There are two processes which insert edges into _L<sub>B</sub>_: edge scanning and blossom
creation.
Edge scanning inserts each graph edge at most twice per stage for a total cost of _O(m)_ steps
per stage.
Blossom creation inserts at most _O(n)_ edges per blossom creation.
Therefore the total cost of S-blossom least-slack edge management is
_O(m + n<sup>2</sup>) = O(n<sup>2</sup>)_ steps per stage.
During each stage, a vertex becomes an S-vertex at most once, and an S-vertex becomes
unlabeled at most once.
In both cases, the incident edges of the affected vertex are scanned and potentially
added to or removed from priority queues.
This involves finding the top-level blossoms of the endpoints of each edge, which
takes time _O(log n)_ per edge.
The updates to priority queues also take time _O(log n)_ per edge.
Edge scanning thus takes total time _O(m log n)_ per stage.
Note that _m &le; n<sup>2</sup>_ therefore _log m &le; 2 log n_.
This implies that an operation on a priority queue with _m_ elements takes time _O(log n)_.
Expanding a blossom involves some bookkeeping which takes time _O(log n)_ per sub-blossom.
It also involves splitting the concatenable queue that tracks the vertices in top-level blossoms,
which takes time _O(log n)_ per sub-blossom.
In case of a T-blossom, it also involves reconstructing the alternating path through
the blossom which takes time _O(k log n)_ for _k_ sub-blossoms.
Also in case of a T-blossom, some sub-blossoms will become S-blossoms and their
vertices become S-vertices, but I have already accounted for that cost above
so I can ignore it here.
Expanding a blossom thus takes time _O(k log n)_.
Any blossom is involved as a sub-blossom in an expanding blossom at most once per stage.
Blossom expansion thus takes total time _O(n log n)_ per stage.
The number of blossom expansions during a stage is _O(n)_.
Expanding a blossom involves some bookkeeping per sub-blossom,
and reconstructing the alternating path through the blossom,
and inserting any new S-vertices into _Q_,
all of which can be done in _O(n)_ steps per blossom.
Therefore blossom expansion needs _O(n<sup>2</sup>)_ steps per stage.
The length of an augmenting path is _O(n)_.
Tracing the augmenting path and augmenting the matching along the path can be done
in _O(n log n)_ steps.
Augmenting through a blossom takes a number of steps that is proportional to the number of
Tracing the augmenting path and augmenting the matching along the path can be done in _O(n)_ steps.
Augmenting through a blossom takes a number of steps that is proportional to the number of
its sub-blossoms.
Since there are fewer than _n_ non-trivial blossoms, the total cost of augmenting through
blossoms is _O(n)_ steps.
Augmenting thus takes total time _O(n log n)_ per stage.
Therefore the total cost of augmenting is _O(n)_ steps per stage.
A delta step limited by _&delta;<sub>1</sub>_ ends the algorithm and therefore happens at most once.
A _&delta;<sub>2</sub>_ step assigns a label to a previously unlabeled blossom and therefore
happens _O(n)_ times per stage.
A _&delta;<sub>3</sub>_ step either creates a blossom or finds an augmenting path and therefore
happens _O(n)_ times per stage.
A _&delta;<sub>4</sub>_ step expands a blossom and therefore happens _O(n)_ times per stage.
Therefore the number of delta steps is _O(n)_ per stage.
A dual variable update limited by _&delta;<sub>1</sub>_ ends the algorithm and therefore
happens at most once.
An update limited by _&delta;<sub>2</sub>_ labels a previously unlabeled blossom
and therefore happens _O(n)_ times per stage.
An update limited by _&delta;<sub>3</sub>_ either creates a blossom or finds an augmenting path
and therefore happens _O(n)_ times per stage.
An update limited by _&delta;<sub>4</sub>_ expands a blossom and therefore happens
_O(n)_ times per stage.
Therefore the number of dual variable updates is _O(n)_ per stage.
The cost of calculating the _&delta;_ values is _O(n)_ per update as discussed above.
Applying changes to the dual variables can be done by looping over all vertices and looping over
all top-level blossoms in _O(n)_ steps per update.
Therefore the total cost of dual variable updates is _O(n<sup>2</sup>)_ per stage.
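As a sketch, that direct (non-lazy) update loop could look like this; the `LABEL_S`/`LABEL_T` constants and the `dual_var` attribute are assumptions mirroring the description above, not the repository's exact code:

```python
LABEL_S, LABEL_T = 1, 2

def apply_delta(delta, vertex_dual, vertex_label, top_blossoms):
    """Apply a dual variable update eagerly: one pass over all vertices
    and one pass over all top-level blossoms, O(n) steps per update."""
    for x in range(len(vertex_dual)):
        if vertex_label[x] == LABEL_S:
            vertex_dual[x] -= delta       # S-vertex duals decrease by delta
        elif vertex_label[x] == LABEL_T:
            vertex_dual[x] += delta       # T-vertex duals increase by delta
    for b in top_blossoms:                # non-trivial top-level blossoms
        if b.label == LABEL_S:
            b.dual_var += 2 * delta       # S-blossom duals grow by 2*delta
        elif b.label == LABEL_T:
            b.dual_var -= 2 * delta       # T-blossom duals shrink by 2*delta
```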
Calculating _&delta;<sub>1</sub>_ takes constant time.
Calculating _&delta;<sub>2</sub>_ and _&delta;<sub>4</sub>_ requires a constant number
of lookups in priority queues which takes time _O(log n)_ per delta step.
During calculation of _&delta;<sub>3</sub>_, it may be necessary to remove stage edges
from the priority queue.
Since every edge is inserted into the priority queue at most once per stage,
at most _O(m)_ edges are removed per stage, which takes total time _O(m log n)_ per stage.
Calculation of _&delta;_ thus takes total time _O((n + m) log n)_ per stage.
Applying updates to dual variables is done in a lazy fashion as discussed above.
The only variable that is updated directly is _&Delta;_, which takes time _O(1)_ per delta step.
Updating dual variables thus takes total time _O(n)_ per stage.
## Implementation details
@@ -1006,100 +819,17 @@ Every edge therefore appears in two adjacency lists.
These data structures are initialized at the start of the matching algorithm
and never change while the algorithm runs.
#### Priority queue
Priority queues are used for a number of purposes:
- a priority queue to find the least-slack edge between S-blossoms;
- a priority queue to find the minimum-dual T-blossom;
- a priority queue to find the unlabeled blossom with least-slack edge to an S-blossom;
- a separate priority queue per vertex to find the least-slack edge between that vertex
and any S-vertex.
These queues are implemented as binary heaps.
This type of queue supports the following operations:
- _insert_ a new element with specified priority in time _O(log n)_;
- find the element with _minimum_ priority in time _O(1)_;
- _delete_ a specified element in time _O(log n)_;
- _change_ the priority of a specified element in time _O(log n)_.
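A compact sketch of such a heap (not the repository's implementation; node handles returned by `insert` stand in for whatever element identity the real queue uses):

```python
class BinaryHeap:
    """Binary min-heap supporting insert, find-min, delete and
    change-priority, each addressed via a node handle."""

    class Node:
        def __init__(self, prio, data):
            self.prio = prio
            self.data = data
            self.index = 0          # current position in the heap array

    def __init__(self):
        self.heap = []

    def insert(self, prio, data):   # O(log n)
        node = self.Node(prio, data)
        node.index = len(self.heap)
        self.heap.append(node)
        self._sift_up(node.index)
        return node

    def find_min(self):             # O(1); heap must be non-empty
        return self.heap[0]

    def delete(self, node):         # O(log n)
        last = self.heap.pop()
        if node is not last:
            i = node.index
            self.heap[i] = last
            last.index = i
            self._sift_up(i)
            self._sift_down(last.index)

    def set_prio(self, node, prio): # O(log n)
        node.prio = prio
        self._sift_up(node.index)
        self._sift_down(node.index)

    def _swap(self, i, j):
        h = self.heap
        h[i], h[j] = h[j], h[i]
        h[i].index, h[j].index = i, j

    def _sift_up(self, i):
        while i > 0 and self.heap[(i - 1) // 2].prio > self.heap[i].prio:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            c = 2 * i + 1
            if c + 1 < n and self.heap[c + 1].prio < self.heap[c].prio:
                c += 1
            if c >= n or self.heap[i].prio <= self.heap[c].prio:
                return
            self._swap(i, c)
            i = c
```

The `index` field on each node is what makes _delete_ and _change_ possible in _O(log n)_: without it, locating an arbitrary element in the array would take _O(n)_.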
#### Concatenable priority queue
Each top-level blossom maintains a concatenable priority queue containing its vertices.
We use a specific type of concatenable queue that supports the following operations
[[4]](#galil_micali_gabow1986) [[8]](#aho_hopcroft_ullman1974):
- _create_ a new queue containing 1 new element;
- find the element with _minimum_ priority in time _O(1)_;
- _change_ the priority of a given element in time _O(log n)_;
- _merge_ two queues into one new queue in time _O(log n)_;
- _split_ a queue, thus undoing the previous _merge_ step in time _O(log n)_.
The efficient _merge_ and _split_ operations make it possible to adapt the queue during
blossom creation and blossom expansion steps.
The priorities in the queue are used to find, for a given top-level blossom, its vertex
with least-slack edge to an S-blossom.
The concatenable queue is implemented as a balanced tree, specifically a _2-3 tree_.
Each internal node of a 2-3 tree has either 2 or 3 children.
The leaf nodes of the tree represent the elements of the queue.
All leaf nodes are at the same distance from the root.
Each node has a pointer to its parent, and each internal node has pointers to its children.
Each internal node also stores its height (distance to its leaf nodes).
Only leaf nodes have a priority.
However, each internal node maintains a pointer to the leaf node with minimum priority
within its subtree.
As a consequence, the root of the tree has a pointer to the least-priority element in the queue.
To keep this information consistent, any change in the priority of a leaf node must
be followed by updating the internal nodes along a path from the leaf node to the root.
The same must be done when the structure of the tree is adjusted.
The left-to-right order of the leaf nodes is preserved during all operations, including _merge_
and _split_.
When trees _A_ and _B_ are merged, the sequence of leaf nodes in the merged tree will consist of
the leaf nodes of _A_ followed by the leaf nodes of _B_.
Note that the left-to-right order of the leaf nodes is unrelated to the priorities of the elements.
To merge two trees, the root of the smaller tree is inserted as a child of an appropriate node
in the larger tree.
Subsequent adjustments are needed to restore the consistency of the 2-3 tree and to update
the minimum-element pointers of the internal nodes along a path from the insertion point
to the root of the merged tree.
This can be done in a number of steps proportional to the difference in height between
the two trees, which is in any case _O(log n)_.
To split a tree, a _split node_ is identified: the left-most leaf node that must end up
in the right-side tree after splitting.
Internal nodes are deleted along the path from the _split node_ to the root of the tree.
This creates a forest of disconnected subtrees on the left side of the path,
and a similar forest of subtrees on the right side of the split path.
The left-side subtrees are reassembled into a single tree through a series of _merge_ steps.
Although _O(log n)_ merge steps may be needed, the total time required for reassembly
is also _O(log n)_ because each merge step combines trees of similar height.
A similar reassembly is done for the forest on the right side of the split path.
The concatenable queues have an additional purpose in the matching algorithm:
finding the top-level blossom that contains a given vertex.
To do this, we assign a _name_ to each concatenable queue instance, which is simply
a pointer to the top-level blossom that maintains the queue.
An extra operation is defined:
_find_ the name of the queue instance that contains a given element in time _O(log n)_.
Implementing the _find_ operation is easy:
Starting at the leaf node that represents the element, follow _parent_ pointers
to the root of the tree.
The root node contains the name of the queue.
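Under those assumptions, the operation is just a walk to the root (a sketch; the real tree nodes carry more bookkeeping):

```python
class QueueNode:
    """Minimal node of a concatenable queue: every node has a parent
    pointer, and only the root stores the queue's name."""

    def __init__(self, parent=None, name=None):
        self.parent = parent
        self.name = name            # meaningful only on the root node

    def find(self):
        # Follow parent pointers to the root; O(log n) in a balanced tree.
        node = self
        while node.parent is not None:
            node = node.parent
        return node.name
```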
#### General data
`vertex_mate[x] = y` if the edge between vertex _x_ and vertex _y_ is matched. <br>
`vertex_mate[x] = -1` if vertex _x_ is unmatched.
`vertex_dual[x]` holds the modified vertex dual _u'<sub>x</sub>_.
`vertex_top_blossom[x] =` pointer to _B(x)_, the top-level blossom that contains vertex _x_.
A global list holds vertex indices of S-vertices whose edges have not yet been scanned.
Vertices are inserted in this list when their top-level blossom gets label S.
`vertex_dual[x]` holds the value of _u<sub>x</sub>_.
A FIFO queue holds vertex indices of S-vertices whose edges have not yet been scanned.
Vertices are inserted in this queue as soon as their top-level blossom gets label S.
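The scan loop driven by that queue can be sketched as follows; the `scan_edge` callback is a hypothetical stand-in for the algorithm's per-edge processing:

```python
from collections import deque

def scan_new_s_vertices(queue, adjacent_edges, scan_edge):
    """Pop unscanned S-vertices from the FIFO queue and scan their
    incident edges; each vertex enters the queue at most once per stage."""
    while queue:
        x = queue.popleft()
        for e in adjacent_edges[x]:
            scan_edge(x, e)
```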
#### Blossoms
@@ -1112,28 +842,56 @@ Both types of blossoms are represented as class instances with the following attributes:
* `B.label` is `S` or `T` or `None`
* `B.tree_edge = (x, y)` if _B_ is a labeled top-level blossom, where _y_ is a vertex in _B_
and _(x, y)_ is the edge that links _B_ to its parent in the alternating tree.
* `B.tree_blossoms` points to a list of blossoms in the same alternating tree, if _B_
is a labeled top-level blossom.
* `B.vertex_dual_offset` holds the pending change to vertex duals _offset<sub>B</sub>_.
A non-trivial blossom additionally has the following attributes:
* `B.subblossoms` is an array of pointers to the sub-blossoms of _B_,
starting with the sub-blossom that contains the base vertex.
* `B.edges` is an array of alternating edges connecting the sub-blossoms.
* `B.dual_var` holds the modified blossom dual _z'<sub>B</sub>_.
* `B.dual_var` holds the value of _z<sub>B</sub>_.
Single-vertex blossoms are kept in an array indexed by vertex index. <br>
Non-trivial blossoms are kept in a separate list. <br>
Non-trivial blossoms are kept in a separate array. <br>
These arrays are used to iterate over blossoms and to find the trivial blossom
that consists of a given vertex.
#### Least-slack edge tracking
`vertex_best_edge[x]` is an array holding _e<sub>x</sub>_, the edge index of
the least-slack edge between vertex _x_ and any S-vertex, or -1 if there is no such edge.
This value is only meaningful if _x_ is a T-vertex or unlabeled vertex.
`B.best_edge` is a blossom attribute holding _e<sub>B</sub>_, the edge index of the least-slack
edge between blossom _B_ and any other S-blossom, or -1 if there is no such edge.
This value is only meaningful if _B_ is a top-level S-blossom.
For non-trivial S-blossoms _B_, attribute `B.best_edge_set` holds the list _L<sub>B</sub>_
of potential least-slack edges to other blossoms.
This list is not maintained for single-vertex blossoms, since _L<sub>B</sub>_ of a single vertex
can be efficiently constructed from its adjacency list.
#### Memory usage
The data structures described above use a constant amount of memory per vertex and per edge
and per blossom.
Therefore the total memory requirement is _O(m + n)_.
The memory usage of _L<sub>B</sub>_ is a little tricky.
Any given list _L<sub>B</sub>_ can have length _O(n)_, and _O(n)_ of these lists can exist
simultaneously.
Naively allocating space for _O(n)_ elements per list will drive memory usage
up to _O(n<sup>2</sup>)_.
However, at any moment, an edge can be in at most two of these lists, therefore the sum
of the lengths of these lists is limited to _O(m)_.
A possible solution is to implement the _L<sub>B</sub>_ as linked lists.
### Performance critical routines
Calculations that happen very frequently in the algorithm are:
determining the top-level blossom of a given vertex, and calculating the slack of a given edge.
These calculations must run in constant time per call in any case, but it makes sense to put
some extra effort into making these calculations _fast_.
### Recursion
Certain tasks in the algorithm are recursive in nature:
@@ -1190,41 +948,59 @@ Proof by induction that all vertex duals are multiples of 0.5 and all blossom duals are integers
- Blossom duals increase or decrease by _2\*&delta;_,
therefore updated blossom duals are still integers.
It is useful to know that (modified) dual variables and (modified) edge slacks
are limited to a finite range of values which depends on the maximum edge weight.
This may be helpful when choosing an integer data type for these variables.
The value of vertex dual variables and blossom dual variables never exceeds the
greatest edge weight in the graph.
This may be helpful for choosing an integer data type for the dual variables.
(Alternatively, choose a programming language with unlimited integer range.
This is perhaps the thing I love most about Python.)
- The value of _&Delta;_ (sum over _&delta;_ steps) does not exceed _maxweight / 2_.
Proof:
- Vertex dual variables start at _u<sub>x</sub> = maxweight_ / 2.
- While the algorithm runs, there is at least one vertex which has been unmatched
since the beginning.
This vertex has always had label S, therefore its dual is _maxweight/2 - &Delta;_.
Vertex deltas can not be negative, therefore _&Delta; &le; maxweight/2_.
- Vertex duals are limited to the range 0 to _maxweight_.
- Blossom duals are limited to the range 0 to _maxweight_.
- Edge slack is limited to the range 0 to _2\*maxweight_.
- Modified vertex duals are limited to the range 0 to _1.5\*maxweight_.
- Modified blossom duals are limited to the range _-maxweight to 2\*maxweight_.
- Modified edge slack is limited to the range 0 to _3\*maxweight_.
- Dual offsets are limited to the range _-maxweight/2_ to _maxweight/2_.
Proof that dual variables do not exceed _max-weight_:
- Vertex dual variables start at _u<sub>x</sub> = 0.5\*max-weight_.
- While the algorithm runs, there is at least one vertex which has been unmatched
since the beginning.
This vertex has always had label S, therefore its dual always decreased by _&delta;_
during a dual variable update.
Since it started at _0.5\*max-weight_ and can not become negative,
the sum of _&delta;_ over all dual variable updates can not exceed _0.5\*max-weight_.
- Vertex dual variables increase by at most _&delta;_ per update.
Therefore no vertex dual can increase by more than _0.5\*max-weight_ in total.
Therefore no vertex dual can exceed _max-weight_.
- Blossom dual variables start at _z<sub>B</sub> = 0_.
- Blossom dual variables increase by at most _2\*&delta;_ per update.
Therefore no blossom dual can increase by more than _max-weight_ in total.
Therefore no blossom dual can exceed _max-weight_.
### Handling floating point edge weights
Floating point calculations are subject to rounding errors.
As a result, the algorithm may return a matching which has slightly lower weight than
the actual maximum weight.
This has two consequences for the matching algorithm:
The algorithm will always return a valid matching, even if rounding errors occur.
Floating point comparisons affect which actions are taken during delta steps,
and thus eventually determine which edges are matched.
But the overall structure of the algorithm guarantees that it will eventually return
a valid (if possibly suboptimal) matching.
- The algorithm may return a matching which has slightly lower weight than
the actual maximum weight.
The most challenging cases are probably graphs with edge weights that differ by many orders
of magnitude.
- The algorithm may not reliably recognize tight edges.
To check whether an edge is tight, its slack is compared to zero.
Rounding errors may cause the slack to appear positive even when an exact calculation
would show it to be zero.
The slack of some edges may even become slightly negative.
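The comparison problem is easy to reproduce with edge weights that are exact in decimal but not in binary floating point (a minimal illustration, not code from the algorithm):

```python
# Duals and weight chosen so the edge is exactly tight in real arithmetic.
u_x = 0.1           # vertex dual of x
u_y = 0.2           # vertex dual of y
w = 0.3             # edge weight

slack = u_x + u_y - w   # mathematically zero
print(slack)            # a tiny positive value, not 0.0
print(slack == 0.0)     # False: a strict test fails to see the tight edge
```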
I believe this does not affect the correctness of the algorithm.
An edge that should be tight but is not recognized as tight due to rounding errors,
can be pulled tight through an additional dual variable update.
As side-effect of this update, the edge will immediately be used to grow the alternating tree,
or construct a blossom or augmenting path.
This mechanism allows the algorithm to make progress, even if slack comparisons
are repeatedly thrown off by rounding errors.
Rounding errors may cause the algorithm to perform more dual variable updates
than strictly necessary.
But this will still not cause the run time of the algorithm to exceed _O(n<sup>3</sup>)_.
It seems to me that the matching algorithm is stable for floating point weights.
And it seems to me that it returns a matching which is close to optimal,
and could have been optimal if edge weights were changed by small amounts.
I must admit these arguments are mostly based on intuition.
Unfortunately I don't know how to properly analyze the floating point accuracy of this algorithm.
### Finding a maximum weight matching out of all maximum cardinality matchings
@@ -1281,14 +1057,7 @@ changing all weights by the same amount doesn't change which of these matchings
([link](https://dl.acm.org/doi/abs/10.5555/320176.320229))
([pdf](https://dl.acm.org/doi/pdf/10.5555/320176.320229))
7. <a id="mehlhorn_schafer2002"></a>
Kurt Mehlhorn, Guido Schäfer, "Implementation of O(nm log(n)) Weighted Matchings in General Graphs: The Power of Data Structures", _Journal of Experimental Algorithmics vol. 7_, 2002.
([link](https://dl.acm.org/doi/10.1145/944618.944622))
8. <a id="aho_hopcroft_ullman1974"></a>
Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman,
_The Design and Analysis of Computer Algorithms_, Addison-Wesley, 1974.
---
Written in 2023-2024 by Joris van Rantwijk.
Written in 2023 by Joris van Rantwijk.
This work is licensed under [CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0/).
View File
@@ -10,7 +10,7 @@ import math
from collections.abc import Sequence
from typing import NamedTuple, Optional
from .datastruct import ConcatenableQueue, PriorityQueue
from .datastruct import UnionFindQueue, PriorityQueue
def maximum_weight_matching(
@@ -391,10 +391,9 @@ class Blossom:
# all top-level blossoms in the tree.
self.tree_blossoms: Optional[set[Blossom]] = None
# Each top-level blossom maintains a concatenable queue containing
# all vertices in the blossom.
self.vertex_queue: ConcatenableQueue[Blossom, int]
self.vertex_queue = ConcatenableQueue(self)
# Each top-level blossom maintains a union-find datastructure
# containing all vertices in the blossom.
self.vertex_set: UnionFindQueue[Blossom, int] = UnionFindQueue(self)
# If this is a top-level unlabeled blossom with an edge to an
# S-blossom, "delta2_node" is the corresponding node in the delta2
@@ -470,9 +469,6 @@ class NonTrivialBlossom(Blossom):
# The value of the dual variable changes through delta steps,
# but these changes are implemented as lazy updates.
#
# blossom.dual_var holds the modified blossom dual value.
# The modified blossom dual is invariant under delta steps.
#
# The true dual value of a top-level S-blossom is
# blossom.dual_var + ctx.delta_sum_2x
#
@@ -482,6 +478,7 @@
# The true dual value of any other type of blossom is simply
# blossom.dual_var
#
# Note that "dual_var" is invariant under delta steps.
self.dual_var: float = 0
# If this is a top-level T-blossom,
@@ -556,13 +553,12 @@ class MatchingContext:
# Initially there are no non-trivial blossoms.
self.nontrivial_blossom: set[NonTrivialBlossom] = set()
# "vertex_queue_node[x]" represents the vertex "x" inside the
# concatenable queue of its top-level blossom.
# "vertex_set_node[x]" represents the vertex "x" inside the
# union-find datastructure of its top-level blossom.
#
# Initially, each vertex belongs to its own trivial top-level blossom.
self.vertex_queue_node = [
b.vertex_queue.insert(i, math.inf)
for (i, b) in enumerate(self.trivial_blossom)]
self.vertex_set_node = [b.vertex_set.insert(i, math.inf)
for (i, b) in enumerate(self.trivial_blossom)]
# All vertex duals are initialized to half the maximum edge weight.
#
@@ -577,9 +573,6 @@
# The value of the dual variable changes through delta steps,
# but these changes are implemented as lazy updates.
#
# vertex_dual_2x[x] holds 2 times the modified vertex dual value of
# vertex "x". The modified vertex dual is invariant under delta steps.
#
# The true dual value of an S-vertex is
# (vertex_dual_2x[x] - delta_sum_2x) / 2
#
@@ -589,6 +582,7 @@
# The true dual value of an unlabeled vertex is
# (vertex_dual_2x[x] + B(x).vertex_dual_offset) / 2
#
# Note that "vertex_dual_2x" is invariant under delta steps.
self.vertex_dual_2x: list[float]
self.vertex_dual_2x = num_vertex * [self.start_vertex_dual_2x]
@@ -628,19 +622,8 @@
for blossom in itertools.chain(self.trivial_blossom,
self.nontrivial_blossom):
blossom.parent = None
blossom.vertex_queue.clear()
del blossom.vertex_queue
#
# Find top-level blossom:
#
def top_level_blossom(self, x: int) -> Blossom:
"""Find the top-level blossom that contains vertex "x".
This function takes time O(log(n)).
"""
return self.vertex_queue_node[x].find()
blossom.vertex_set.clear()
del blossom.vertex_set
#
# Least-slack edge tracking:
@@ -652,14 +635,14 @@
The pseudo-slack of an edge is related to its true slack, but
adjusted in a way that makes it invariant under delta steps.
The true slack of an edge between two S-vertices in different
top-level blossoms is
edge_pseudo_slack_2x(e) / 2 - delta_sum_2x
If the edge connects two S-vertices in different top-level blossoms,
the true slack is the pseudo-slack minus 2 times the running sum
of delta steps.
The true slack of an edge between an S-vertex and an unlabeled
vertex "y" inside top-level blossom B(y) is
(edge_pseudo_slack_2x(e)
- delta_sum_2x + B(y).vertex_dual_offset) / 2
If the edge connects an S-vertex to an unlabeled vertex,
the true slack is the pseudo-slack minus the running sum of delta
steps, plus the pending offset of the top-level blossom that contains
the unlabeled vertex.
"""
(x, y, w) = self.graph.edges[e]
return self.vertex_dual_2x[x] + self.vertex_dual_2x[y] - 2 * w
@@ -685,8 +668,8 @@
if not improved:
return
# Update the priority of "y" in its ConcatenableQueue.
self.vertex_queue_node[y].set_prio(prio)
# Update the priority of "y" in its UnionFindQueue.
self.vertex_set_node[y].set_prio(prio)
# If the blossom is unlabeled and the new edge becomes its least-slack
# S-edge, insert or update the blossom in the global delta2 queue.
@@ -718,13 +701,13 @@
else:
prio = vertex_sedge_queue.find_min().prio
# If necessary, update priority of "y" in its ConcatenableQueue.
if prio > self.vertex_queue_node[y].prio:
self.vertex_queue_node[y].set_prio(prio)
# If necessary, update the priority of "y" in its UnionFindQueue.
if prio > self.vertex_set_node[y].prio:
self.vertex_set_node[y].set_prio(prio)
if by.label == LABEL_NONE:
# Update or delete the blossom in the global delta2 queue.
assert by.delta2_node is not None
prio = by.vertex_queue.min_prio()
prio = by.vertex_set.min_prio()
if prio < math.inf:
prio += by.vertex_dual_offset
if prio > by.delta2_node.prio:
@@ -744,7 +727,7 @@
This function takes time O(log(n)).
"""
assert blossom.delta2_node is None
prio = blossom.vertex_queue.min_prio()
prio = blossom.vertex_set.min_prio()
if prio < math.inf:
prio += blossom.vertex_dual_offset
blossom.delta2_node = self.delta2_queue.insert(prio, blossom)
@@ -775,7 +758,7 @@
self.vertex_sedge_queue[x].clear()
for e in self.graph.adjacent_edges[x]:
self.vertex_sedge_node[e] = None
self.vertex_queue_node[x].set_prio(math.inf)
self.vertex_set_node[x].set_prio(math.inf)
def delta2_get_min_edge(self) -> tuple[int, float]:
"""Find the least-slack edge between any S-vertex and any unlabeled
@@ -798,7 +781,7 @@
assert blossom.parent is None
assert blossom.label == LABEL_NONE
x = blossom.vertex_queue.min_elem()
x = blossom.vertex_set.min_elem()
e = self.vertex_sedge_queue[x].find_min().data
return (e, slack_2x)
@@ -854,8 +837,8 @@
delta3_node = self.delta3_queue.find_min()
e = delta3_node.data
(x, y, _w) = self.graph.edges[e]
bx = self.top_level_blossom(x)
by = self.top_level_blossom(y)
bx = self.vertex_set_node[x].find()
by = self.vertex_set_node[y].find()
assert (bx.label == LABEL_S) and (by.label == LABEL_S)
if bx is not by:
slack = delta3_node.prio - self.delta_sum_2x
@@ -1017,7 +1000,7 @@
# and vertex "x" is no longer an S-vertex.
self.delta3_remove_edge(e)
by = self.top_level_blossom(y)
by = self.vertex_set_node[y].find()
if by.label == LABEL_S:
# Edge "e" connects unlabeled vertex "x" to S-vertex "y".
# It must be tracked for delta2 via vertex "x".
@@ -1137,7 +1120,7 @@
while x != -1 or y != -1:
# Check if we found a common ancestor.
bx = self.top_level_blossom(x)
bx = self.vertex_set_node[x].find()
if bx.marker:
first_common = bx
break
@@ -1165,8 +1148,8 @@
# If we found a common ancestor, trim the paths so they end there.
if first_common is not None:
assert self.top_level_blossom(xedges[-1][0]) is first_common
while (self.top_level_blossom(yedges[-1][0])
assert self.vertex_set_node[xedges[-1][0]].find() is first_common
while (self.vertex_set_node[yedges[-1][0]].find()
is not first_common):
yedges.pop()
@@ -1203,12 +1186,12 @@
assert len(path.edges) >= 3
# Construct the list of sub-blossoms (current top-level blossoms).
subblossoms = [self.top_level_blossom(x) for (x, y) in path.edges]
subblossoms = [self.vertex_set_node[x].find() for (x, y) in path.edges]
# Check that the path is cyclic.
# Note the path will not always start and end with the same _vertex_,
# but it must start and end in the same _blossom_.
subblossoms_next = [self.top_level_blossom(y)
subblossoms_next = [self.vertex_set_node[y].find()
for (x, y) in path.edges]
assert subblossoms[0] == subblossoms_next[-1]
assert subblossoms[1:] == subblossoms_next[:-1]
@@ -1252,8 +1235,8 @@
sub.tree_blossoms = None
tree_blossoms.remove(sub)
# Merge concatenable queues.
blossom.vertex_queue.merge([sub.vertex_queue for sub in subblossoms])
# Merge union-find structures.
blossom.vertex_set.merge([sub.vertex_set for sub in subblossoms])
@staticmethod
def find_path_through_blossom(
@@ -1297,8 +1280,8 @@
# Remove blossom from the delta2 queue.
self.delta2_disable_blossom(blossom)
# Split concatenable queue.
blossom.vertex_queue.split()
# Split union-find structure.
blossom.vertex_set.split()
# Prepare to push lazy delta updates down to the sub-blossoms.
vertex_dual_offset = blossom.vertex_dual_offset
@@ -1315,7 +1298,7 @@ class MatchingContext:
self.delta2_enable_blossom(sub)
# Avoid leaking a reference cycle.
del blossom.vertex_queue
del blossom.vertex_set
# Delete the expanded blossom.
self.nontrivial_blossom.remove(blossom)
@@ -1351,7 +1334,7 @@ class MatchingContext:
# the alternating tree.
assert blossom.tree_edge is not None
(x, y) = blossom.tree_edge
sub = self.top_level_blossom(y)
sub = self.vertex_set_node[y].find()
# Assign label T to that sub-blossom.
self.assign_blossom_label_t(sub)
@@ -1489,14 +1472,14 @@ class MatchingContext:
def augment_matching(self, path: AlternatingPath) -> None:
"""Augment the matching through the specified augmenting path.
This function takes time O(n * log(n)).
This function takes time O(n).
"""
# Check that the augmenting path starts and ends in
# an unmatched vertex or a blossom with unmatched base.
assert len(path.edges) % 2 == 1
for x in (path.edges[0][0], path.edges[-1][1]):
b = self.top_level_blossom(x)
b = self.vertex_set_node[x].find()
assert self.vertex_mate[b.base_vertex] == -1
# The augmenting path looks like this:
@@ -1514,11 +1497,11 @@ class MatchingContext:
# Augment the non-trivial blossoms on either side of this edge.
# No action is necessary for trivial blossoms.
bx = self.top_level_blossom(x)
bx = self.vertex_set_node[x].find()
if isinstance(bx, NonTrivialBlossom):
self.augment_blossom(bx, self.trivial_blossom[x])
by = self.top_level_blossom(y)
by = self.vertex_set_node[y].find()
if isinstance(by, NonTrivialBlossom):
self.augment_blossom(by, self.trivial_blossom[y])
@@ -1543,14 +1526,14 @@ class MatchingContext:
"""
# Assign label S to the blossom that contains vertex "x".
bx = self.top_level_blossom(x)
bx = self.vertex_set_node[x].find()
self.assign_blossom_label_s(bx)
# Vertex "x" is matched to T-vertex "y".
y = self.vertex_mate[x]
assert y != -1
by = self.top_level_blossom(y)
by = self.vertex_set_node[y].find()
assert by.label == LABEL_T
assert by.tree_blossoms is not None
@@ -1573,14 +1556,14 @@ class MatchingContext:
- There is a tight edge between vertices "x" and "y".
"""
bx = self.top_level_blossom(x)
by = self.top_level_blossom(y)
bx = self.vertex_set_node[x].find()
by = self.vertex_set_node[y].find()
assert bx.label == LABEL_S
# Expand zero-dual blossoms before assigning label T.
while isinstance(by, NonTrivialBlossom) and (by.dual_var == 0):
self.expand_unlabeled_blossom(by)
by = self.top_level_blossom(y)
by = self.vertex_set_node[y].find()
# Assign label T to the unlabeled blossom.
self.assign_blossom_label_t(by)
@@ -1614,8 +1597,8 @@ class MatchingContext:
True if the matching was augmented; otherwise False.
"""
bx = self.top_level_blossom(x)
by = self.top_level_blossom(y)
bx = self.vertex_set_node[x].find()
by = self.vertex_set_node[y].find()
assert bx.label == LABEL_S
assert by.label == LABEL_S
@@ -1679,7 +1662,7 @@ class MatchingContext:
for x in self.scan_queue:
# Double-check that "x" is an S-vertex.
bx = self.top_level_blossom(x)
bx = self.vertex_set_node[x].find()
assert bx.label == LABEL_S
# Scan the edges that are incident on "x".
@@ -1699,7 +1682,7 @@ class MatchingContext:
# Try to pull this edge into an alternating tree.
# Ignore edges that are internal to a blossom.
by = self.top_level_blossom(y)
by = self.vertex_set_node[y].find()
if bx is by:
continue
@@ -1794,7 +1777,7 @@ class MatchingContext:
"""
for x in range(self.graph.num_vertex):
assert self.vertex_mate[x] == -1
bx = self.top_level_blossom(x)
bx = self.vertex_set_node[x].find()
assert bx.base_vertex == x
# Assign label S.
@@ -1845,7 +1828,7 @@ class MatchingContext:
# Use the edge from S-vertex to unlabeled vertex that got
# unlocked through the delta update.
(x, y, _w) = self.graph.edges[delta_edge]
if self.top_level_blossom(x).label != LABEL_S:
if self.vertex_set_node[x].find().label != LABEL_S:
(x, y) = (y, x)
self.extend_tree_s_to_t(x, y)
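The recurring substitution in the hunks above replaces `self.top_level_blossom(x)` with `self.vertex_set_node[x].find()`: each vertex keeps a node in a mergeable set structure whose `find()` yields the top-level blossom that currently contains it. A minimal stand-alone sketch of that lookup idea (simplified, with hypothetical names; the real structure also supports splitting and logarithmic-time operations):

```python
class SetNode:
    """Simplified stand-in for a per-vertex set node. find() walks
    parent pointers to the root, which stores the owning object."""

    def __init__(self, owner=None):
        self.parent = None
        self.owner = owner

    def find(self):
        node = self
        while node.parent is not None:
            node = node.parent
        return node.owner


# Vertices 0..2 start out in their own trivial "blossoms".
vertex_set_node = [SetNode(owner=f"B{v}") for v in range(3)]

# Merging blossoms B0 and B1 into a new top-level blossom "B01"
# re-parents their root nodes under a root carrying the new owner.
root = SetNode(owner="B01")
vertex_set_node[0].parent = root
vertex_set_node[1].parent = root

print(vertex_set_node[0].find())  # B01
print(vertex_set_node[2].find())  # B2
```

This is illustrative only; the actual nodes in the diff live in a balanced tree so that `find()` stays O(log n) under merges and splits.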


@@ -11,11 +11,11 @@ _ElemT = TypeVar("_ElemT")
_ElemT2 = TypeVar("_ElemT2")
class ConcatenableQueue(Generic[_NameT, _ElemT]):
"""Priority queue supporting efficient merge and split operations.
class UnionFindQueue(Generic[_NameT, _ElemT]):
"""Combination of disjoint set and priority queue.
This is a combination of a disjoint set and a priority queue.
A queue has a "name", which can be any Python object.
Each element has associated "data", which can be any Python object.
Each element has a priority.
@@ -27,70 +27,68 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
- Merge two or more queues.
- Undo a previous merge step.
This data structure is implemented as a 2-3 tree with minimum-priority
The implementation is essentially an AVL tree, with minimum-priority
tracking added to it.
"""
__slots__ = ("name", "tree", "first_node", "sub_queues")
__slots__ = ("name", "tree", "sub_queues", "split_nodes")
class BaseNode(Generic[_NameT2, _ElemT2]):
"""Node in the 2-3 tree."""
class Node(Generic[_NameT2, _ElemT2]):
"""Node in a UnionFindQueue."""
__slots__ = ("owner", "min_node", "height", "parent", "childs")
__slots__ = ("owner", "data", "prio", "min_node", "height",
"parent", "left", "right")
def __init__(self,
min_node: ConcatenableQueue.Node[_NameT2, _ElemT2],
height: int
owner: UnionFindQueue[_NameT2, _ElemT2],
data: _ElemT2,
prio: float
) -> None:
"""Initialize a new node."""
self.owner: Optional[ConcatenableQueue[_NameT2, _ElemT2]] = None
self.min_node = min_node
self.height = height
self.parent: Optional[ConcatenableQueue.BaseNode[_NameT2,
_ElemT2]]
self.parent = None
self.childs: list[ConcatenableQueue.BaseNode[_NameT2, _ElemT2]]
self.childs = []
class Node(BaseNode[_NameT2, _ElemT2]):
"""Leaf node in the 2-3 tree, representing an element in the queue."""
__slots__ = ("data", "prio")
def __init__(self, data: _ElemT2, prio: float) -> None:
"""Initialize a new leaf node.
"""Initialize a new element.
This method should not be called directly.
Instead, call ConcatenableQueue.insert().
Instead, call UnionFindQueue.insert().
"""
super().__init__(min_node=self, height=0)
self.owner: Optional[UnionFindQueue[_NameT2, _ElemT2]] = owner
self.data = data
self.prio = prio
self.min_node = self
self.height = 1
self.parent: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
self.left: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
self.right: Optional[UnionFindQueue.Node[_NameT2, _ElemT2]]
self.parent = None
self.left = None
self.right = None
def find(self) -> _NameT2:
"""Return the name of the queue that contains this element.
This function takes time O(log(n)).
"""
node: ConcatenableQueue.BaseNode[_NameT2, _ElemT2] = self
node = self
while node.parent is not None:
node = node.parent
assert node.owner is not None
return node.owner.name
def set_prio(self, prio: float) -> None:
"""Change the priority of this element.
This function takes time O(log(n)).
"""
"""Change the priority of this element."""
self.prio = prio
node = self.parent
while node is not None:
min_node = node.childs[0].min_node
for child in node.childs[1:]:
if child.min_node.prio < min_node.prio:
min_node = child.min_node
node = self
while True:
min_node = node
if node.left is not None:
left_min_node = node.left.min_node
if left_min_node.prio < min_node.prio:
min_node = left_min_node
if node.right is not None:
right_min_node = node.right.min_node
if right_min_node.prio < min_node.prio:
min_node = right_min_node
node.min_node = min_node
if node.parent is None:
break
node = node.parent
def __init__(self, name: _NameT) -> None:
@@ -102,10 +100,9 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
name: Name to assign to the new queue.
"""
self.name = name
self.tree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
self.first_node: Optional[ConcatenableQueue.Node[_NameT, _ElemT]]
self.first_node = None
self.sub_queues: list[ConcatenableQueue[_NameT, _ElemT]] = []
self.tree: Optional[UnionFindQueue.Node[_NameT, _ElemT]] = None
self.sub_queues: list[UnionFindQueue[_NameT, _ElemT]] = []
self.split_nodes: list[UnionFindQueue.Node[_NameT, _ElemT]] = []
def clear(self) -> None:
"""Remove all elements from the queue.
@@ -114,8 +111,8 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
"""
node = self.tree
self.tree = None
self.first_node = None
self.sub_queues = []
self.split_nodes.clear()
# Wipe pointers to enable refcounted garbage collection.
if node is not None:
@@ -123,8 +120,12 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
while node is not None:
node.min_node = None # type: ignore
prev_node = node
if node.childs:
node = node.childs.pop()
if node.left is not None:
node = node.left
prev_node.left = None
elif node.right is not None:
node = node.right
prev_node.right = None
else:
node = node.parent
prev_node.parent = None
@@ -142,9 +143,7 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
prio: Initial priority of the new element.
"""
assert self.tree is None
self.tree = ConcatenableQueue.Node(elem, prio)
self.tree.owner = self
self.first_node = self.tree
self.tree = UnionFindQueue.Node(self, elem, prio)
return self.tree
def min_prio(self) -> float:
@@ -168,7 +167,7 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
return node.min_node.data
def merge(self,
sub_queues: list[ConcatenableQueue[_NameT, _ElemT]]
sub_queues: list[UnionFindQueue[_NameT, _ElemT]]
) -> None:
"""Merge the specified queues.
@@ -185,6 +184,7 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
"""
assert self.tree is None
assert not self.sub_queues
assert not self.split_nodes
assert sub_queues
# Keep the list of sub-queues.
@@ -193,7 +193,6 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
# Move the root node from the first sub-queue to this queue.
# Clear its owner pointer.
self.tree = sub_queues[0].tree
self.first_node = sub_queues[0].first_node
assert self.tree is not None
sub_queues[0].tree = None
self.tree.owner = None
@@ -209,7 +208,10 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
subtree.owner = None
# Merge our current tree with the tree from the sub-queue.
self.tree = self._join(self.tree, subtree)
(self.tree, split_node) = self._merge_tree(self.tree, subtree)
# Keep track of the left-most node from the sub-queue.
self.split_nodes.append(split_node)
# Put the owner pointer in the root node.
self.tree.owner = self
@@ -232,10 +234,10 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
self.tree.owner = None
# Split the tree to reconstruct each sub-queue.
for sub in self.sub_queues[:0:-1]:
for (sub, split_node) in zip(self.sub_queues[:0:-1],
self.split_nodes[::-1]):
assert sub.first_node is not None
(tree, rtree) = self._split_tree(sub.first_node)
(tree, rtree) = self._split_tree(split_node)
# Assign the right tree to the sub-queue.
sub.tree = rtree
@@ -247,184 +249,423 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
# Make this queue empty.
self.tree = None
self.first_node = None
self.sub_queues = []
self.split_nodes.clear()
@staticmethod
def _repair_node(node: BaseNode[_NameT, _ElemT]) -> None:
"""Repair min_prio attribute of an internal node."""
min_node = node.childs[0].min_node
for child in node.childs[1:]:
if child.min_node.prio < min_node.prio:
min_node = child.min_node
def _repair_node(node: Node[_NameT, _ElemT]) -> None:
"""Recalculate the height and min-priority information of the
specified node.
"""
# Repair node height.
lh = 0 if node.left is None else node.left.height
rh = 0 if node.right is None else node.right.height
node.height = 1 + max(lh, rh)
# Repair min-priority.
min_node = node
if node.left is not None:
left_min_node = node.left.min_node
if left_min_node.prio < min_node.prio:
min_node = left_min_node
if node.right is not None:
right_min_node = node.right.min_node
if right_min_node.prio < min_node.prio:
min_node = right_min_node
node.min_node = min_node
@staticmethod
def _new_internal_node(ltree: BaseNode[_NameT, _ElemT],
rtree: BaseNode[_NameT, _ElemT]
) -> BaseNode[_NameT, _ElemT]:
"""Create a new internal node with 2 child nodes."""
assert ltree.height == rtree.height
height = ltree.height + 1
if ltree.min_node.prio <= rtree.min_node.prio:
min_node = ltree.min_node
else:
min_node = rtree.min_node
node = ConcatenableQueue.BaseNode(min_node, height)
node.childs = [ltree, rtree]
ltree.parent = node
rtree.parent = node
def _rotate_left(self,
node: Node[_NameT, _ElemT]
) -> Node[_NameT, _ElemT]:
"""Rotate the specified subtree to the left.
Return the new root node of the subtree.
"""
#
# N C
# / \ / \
# A C ---> N D
# / \ / \
# B D A B
#
parent = node.parent
new_top = node.right
assert new_top is not None
node.right = new_top.left
if node.right is not None:
node.right.parent = node
new_top.left = node
new_top.parent = parent
node.parent = new_top
if parent is not None:
if parent.left is node:
parent.left = new_top
elif parent.right is node:
parent.right = new_top
self._repair_node(node)
self._repair_node(new_top)
return new_top
def _rotate_right(self,
node: Node[_NameT, _ElemT]
) -> Node[_NameT, _ElemT]:
"""Rotate the specified node to the right.
Return the new root node of the subtree.
"""
#
# N A
# / \ / \
# A D ---> B N
# / \ / \
# B C C D
#
parent = node.parent
new_top = node.left
assert new_top is not None
node.left = new_top.right
if node.left is not None:
node.left.parent = node
new_top.right = node
new_top.parent = parent
node.parent = new_top
if parent is not None:
if parent.left is node:
parent.left = new_top
elif parent.right is node:
parent.right = new_top
self._repair_node(node)
self._repair_node(new_top)
return new_top
def _rebalance_up(self,
node: Node[_NameT, _ElemT]
) -> Node[_NameT, _ElemT]:
"""Repair and rebalance the specified node and its ancestors.
Return the root node of the rebalanced tree.
"""
# Walk up to the root of the tree.
while True:
lh = 0 if node.left is None else node.left.height
rh = 0 if node.right is None else node.right.height
if lh > rh + 1:
# This node is left-heavy. Rotate right to rebalance.
#
# N L
# / \ / \
# L \ / N
# / \ \ ---> / / \
# A B \ A B \
# \ \
# R R
#
lchild = node.left
assert lchild is not None
if ((lchild.right is not None)
and ((lchild.left is None)
or (lchild.right.height > lchild.left.height))):
# Double rotation.
lchild = self._rotate_left(lchild)
node = self._rotate_right(node)
elif lh + 1 < rh:
# This node is right-heavy. Rotate left to rebalance.
#
# N R
# / \ / \
# / R N \
# / / \ ---> / \ \
# / A B / A B
# / /
# L L
#
rchild = node.right
assert rchild is not None
if ((rchild.left is not None)
and ((rchild.right is None)
or (rchild.left.height > rchild.right.height))):
# Double rotation.
rchild = self._rotate_right(rchild)
node = self._rotate_left(node)
else:
# No rotation. Must still repair node though.
self._repair_node(node)
if node.parent is None:
break
# Continue rebalancing at the parent.
node = node.parent
# Return new root node.
return node
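The rotation and rebalancing steps above rely on the standard AVL invariant that a rotation reorders the tree shape without changing the left-to-right order of the elements. A minimal stand-alone sketch of that invariant (simplified: no parent pointers, height fields, or min-priority tracking, and node names are hypothetical):

```python
class N:
    """Bare binary tree node for illustration."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right


def rotate_left(node):
    # Mirrors the shape change of a left rotation: the right child
    # becomes the subtree root, its left subtree moves under the old root.
    new_top = node.right
    node.right = new_top.left
    new_top.left = node
    return new_top


def inorder(node):
    if node is None:
        return []
    return inorder(node.left) + [node.key] + inorder(node.right)


# Same shape as the "N / A C / B D" diagram in the rotation comments.
t = N("N", N("A"), N("C", N("B"), N("D")))
before = inorder(t)
t = rotate_left(t)
assert inorder(t) == before  # rotation preserves element order
```

The real `_rotate_left`/`_rotate_right` additionally repair parent pointers, heights, and `min_node` via `_repair_node`.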
def _join_right(self,
ltree: BaseNode[_NameT, _ElemT],
rtree: BaseNode[_NameT, _ElemT]
) -> BaseNode[_NameT, _ElemT]:
"""Join two trees together.
ltree: Node[_NameT, _ElemT],
node: Node[_NameT, _ElemT],
rtree: Optional[Node[_NameT, _ElemT]]
) -> Node[_NameT, _ElemT]:
"""Join a left subtree, middle node and right subtree together.
The initial left subtree must be higher than the right subtree.
The left subtree must be higher than the right subtree.
Return the root node of the joined tree.
"""
lh = ltree.height
rh = 0 if rtree is None else rtree.height
assert lh > rh + 1
# Descend down the right spine of the left tree until we
# reach a node just above the right tree.
node = ltree
while node.height > rtree.height + 1:
node = node.childs[-1]
# Descend down the right spine of "ltree".
# Stop at a node with compatible height, then insert "node"
# and attach "rtree".
#
# ltree
# / \
# X
# / \
# X <-- cur
# / \
# node
# / \
# X rtree
#
assert node.height == rtree.height + 1
# Descend to a point with compatible height.
cur = ltree
while (cur.right is not None) and (cur.right.height > rh + 1):
cur = cur.right
# Find a node in the left tree to insert the right tree as child.
while len(node.childs) == 3:
# This node already has 3 childs so we can not add the right tree.
# Rearrange into 2 nodes with 2 childs each, then solve it
# at the parent level.
#
# N N R'
# / | \ / \ / \
# / | \ ---> / \ / \
# A B C R A B C R
#
child = node.childs.pop()
# Insert "node" and "rtree".
node.left = cur.right
node.right = rtree
if node.left is not None:
node.left.parent = node
if rtree is not None:
rtree.parent = node
cur.right = node
node.parent = cur
# A double rotation may be necessary.
if (cur.left is None) or (cur.left.height <= rh):
node = self._rotate_right(node)
cur = self._rotate_left(cur)
else:
self._repair_node(node)
rtree = self._new_internal_node(child, rtree)
if node.parent is None:
# Create a new root node.
return self._new_internal_node(node, rtree)
node = node.parent
self._repair_node(cur)
# Insert the right tree as child of this node.
assert len(node.childs) < 3
node.childs.append(rtree)
rtree.parent = node
# Ascend from "cur" to the root of the tree.
# Repair and/or rotate as needed.
while cur.parent is not None:
cur = cur.parent
assert cur.left is not None
assert cur.right is not None
# Repair min-prio pointers of ancestors.
while True:
self._repair_node(node)
if node.parent is None:
break
node = node.parent
if cur.left.height + 1 < cur.right.height:
cur = self._rotate_left(cur)
else:
self._repair_node(cur)
return node
return cur
def _join_left(self,
ltree: BaseNode[_NameT, _ElemT],
rtree: BaseNode[_NameT, _ElemT]
) -> BaseNode[_NameT, _ElemT]:
"""Join two trees together.
ltree: Optional[Node[_NameT, _ElemT]],
node: Node[_NameT, _ElemT],
rtree: Node[_NameT, _ElemT]
) -> Node[_NameT, _ElemT]:
"""Join a left subtree, middle node and right subtree together.
The initial left subtree must be lower than the right subtree.
The right subtree must be higher than the left subtree.
Return the root node of the joined tree.
"""
lh = 0 if ltree is None else ltree.height
rh = rtree.height
assert lh + 1 < rh
# Descend down the left spine of the right tree until we
# reach a node just above the left tree.
node = rtree
while node.height > ltree.height + 1:
node = node.childs[0]
# Descend down the left spine of "rtree".
# Stop at a node with compatible height, then insert "node"
# and attach "ltree".
#
# rtree
# / \
# X
# / \
# cur --> X
# / \
# node
# / \
# ltree X
#
assert node.height == ltree.height + 1
# Descend to a point with compatible height.
cur = rtree
while (cur.left is not None) and (cur.left.height > lh + 1):
cur = cur.left
# Find a node in the right tree to insert the left tree as child.
while len(node.childs) == 3:
# This node already has 3 childs so we can not add the left tree.
# Rearrange into 2 nodes with 2 childs each, then solve it
# at the parent level.
#
# N L' N
# / | \ / \ / \
# / | \ ---> / \ / \
# L A B C L A B C
#
child = node.childs.pop(0)
# Insert "node" and "ltree".
node.left = ltree
node.right = cur.left
if ltree is not None:
ltree.parent = node
if node.right is not None:
node.right.parent = node
cur.left = node
node.parent = cur
# A double rotation may be necessary.
if (cur.right is None) or (cur.right.height <= lh):
node = self._rotate_left(node)
cur = self._rotate_right(cur)
else:
self._repair_node(node)
ltree = self._new_internal_node(ltree, child)
if node.parent is None:
# Create a new root node.
return self._new_internal_node(ltree, node)
node = node.parent
self._repair_node(cur)
# Insert the left tree as child of this node.
assert len(node.childs) < 3
node.childs.insert(0, ltree)
ltree.parent = node
# Ascend from "cur" to the root of the tree.
# Repair and/or rotate as needed.
while cur.parent is not None:
cur = cur.parent
assert cur.left is not None
assert cur.right is not None
# Repair min-prio pointers of ancestors.
while True:
self._repair_node(node)
if node.parent is None:
break
node = node.parent
if cur.left.height > cur.right.height + 1:
cur = self._rotate_right(cur)
else:
self._repair_node(cur)
return node
return cur
def _join(self,
ltree: BaseNode[_NameT, _ElemT],
rtree: BaseNode[_NameT, _ElemT]
) -> BaseNode[_NameT, _ElemT]:
"""Join two trees together.
ltree: Optional[Node[_NameT, _ElemT]],
node: Node[_NameT, _ElemT],
rtree: Optional[Node[_NameT, _ElemT]]
) -> Node[_NameT, _ElemT]:
"""Join a left subtree, middle node and right subtree together.
The left and right subtree must be consistent 2-3 trees.
Initial parent pointers of these subtrees are ignored.
The left or right subtree may initially be a child of the middle
node; such links will be broken as needed.
The left and right subtrees must be consistent, AVL-balanced trees.
Parent pointers of the subtrees are ignored.
The middle node is considered as a single node.
Its parent and child pointers are ignored.
Return the root node of the joined tree.
"""
if ltree.height > rtree.height:
return self._join_right(ltree, rtree)
elif ltree.height < rtree.height:
return self._join_left(ltree, rtree)
lh = 0 if ltree is None else ltree.height
rh = 0 if rtree is None else rtree.height
if lh > rh + 1:
assert ltree is not None
ltree.parent = None
return self._join_right(ltree, node, rtree)
elif lh + 1 < rh:
assert rtree is not None
rtree.parent = None
return self._join_left(ltree, node, rtree)
else:
return self._new_internal_node(ltree, rtree)
# Subtree heights are compatible. Just join them.
#
# node
# / \
# ltree rtree
# / \ / \
#
node.parent = None
node.left = ltree
if ltree is not None:
ltree.parent = node
node.right = rtree
if rtree is not None:
rtree.parent = node
self._repair_node(node)
return node
def _merge_tree(self,
ltree: Node[_NameT, _ElemT],
rtree: Node[_NameT, _ElemT]
) -> tuple[Node[_NameT, _ElemT], Node[_NameT, _ElemT]]:
"""Merge two trees.
Return a tuple (merged_tree, split_node).
"""
# Find the left-most node of the right tree.
split_node = rtree
while split_node.left is not None:
split_node = split_node.left
# Delete the split_node from its tree.
parent = split_node.parent
if split_node.right is not None:
split_node.right.parent = parent
if parent is None:
rtree_new = split_node.right
else:
# Repair and rebalance the ancestors of split_node.
parent.left = split_node.right
rtree_new = self._rebalance_up(parent)
# Join the two trees via the split_node.
merged_tree = self._join(ltree, split_node, rtree_new)
return (merged_tree, split_node)
def _split_tree(self,
split_node: BaseNode[_NameT, _ElemT]
) -> tuple[BaseNode[_NameT, _ElemT],
BaseNode[_NameT, _ElemT]]:
split_node: Node[_NameT, _ElemT]
) -> tuple[Node[_NameT, _ElemT], Node[_NameT, _ElemT]]:
"""Split a tree on a specified node.
Two new trees will be constructed.
Leaf nodes to the left of "split_node" will go to the left tree.
Leaf nodes to the right of "split_node", and "split_node" itself,
All nodes to the left of "split_node" will go to the left tree.
All nodes to the right of "split_node", and "split_node" itself,
will go to the right tree.
Return tuple (ltree, rtree).
Return tuple (ltree, rtree),
where ltree contains all nodes left of the split-node,
rtree contains the split-node and all nodes to its right.
"""
# Detach "split_node" from its parent.
# Assign it to the right tree.
# Assign the descendants of "split_node" to the appropriate trees
# and detach them from "split_node".
ltree = split_node.left
rtree = split_node.right
split_node.left = None
split_node.right = None
if ltree is not None:
ltree.parent = None
if rtree is not None:
rtree.parent = None
# Detach "split_node" from its parent (if any).
parent = split_node.parent
split_node.parent = None
# The left tree is initially empty.
# The right tree initially contains only "split_node".
ltree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
rtree = split_node
# Assign "split_node" to the right tree.
rtree = self._join(None, split_node, rtree)
# Walk up to the root of the tree.
# On the way up, detach each node from its parent and join its
# child nodes to the appropriate tree.
# On the way up, detach each node from its parent and join it,
# and its descendants, to the appropriate tree.
node = split_node
while parent is not None:
@@ -436,53 +677,16 @@ class ConcatenableQueue(Generic[_NameT, _ElemT]):
# Detach "node" from its parent.
node.parent = None
if len(node.childs) == 3:
if node.childs[0] is child:
# "node" has 3 child nodes.
# Its left subtree has already been split.
# Turn it into a 2-node and join it to the right tree.
node.childs.pop(0)
self._repair_node(node)
rtree = self._join(rtree, node)
elif node.childs[2] is child:
# "node" has 3 child nodes.
# Its right subtree has already been split.
# Turn it into a 2-node and join it to the left tree.
node.childs.pop()
self._repair_node(node)
if ltree is None:
ltree = node
else:
ltree = self._join(node, ltree)
else:
# "node" has 3 child nodes.
# Its middle subtree has already been split.
# Join its left child to the left tree, and its right
# child to the right tree, then delete "node".
node.childs[0].parent = None
node.childs[2].parent = None
if ltree is None:
ltree = node.childs[0]
else:
ltree = self._join(node.childs[0], ltree)
rtree = self._join(rtree, node.childs[2])
elif node.childs[0] is child:
# "node" has 2 child nodes.
# Its left subtree has already been split.
# Join its right child to the right tree, then delete "node".
node.childs[1].parent = None
rtree = self._join(rtree, node.childs[1])
if node.left is child:
# "split_node" was located in the left subtree of "node".
# This implies that "node" must be joined to the right tree.
rtree = self._join(rtree, node, node.right)
else:
# "node" has 2 child nodes.
# Its right subtree has already been split.
# Join its left child to the left tree, then delete "node".
node.childs[0].parent = None
if ltree is None:
ltree = node.childs[0]
else:
ltree = self._join(node.childs[0], ltree)
# "split_node" was located in the right subtree of "node".
# This implies that "node" must be joined to the left tree.
assert node.right is child
ltree = self._join(node.left, node, ltree)
assert ltree is not None
return (ltree, rtree)
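For orientation, the external behaviour of the queue reworked in this file (insert, merge, min_prio/min_elem, and split undoing the last merge) can be modelled naively with plain lists. This sketch is illustrative only, uses hypothetical names, and has none of the O(log n) guarantees of the balanced-tree implementation above:

```python
class NaiveQueue:
    """List-backed model of the mergeable min-priority queue API."""

    def __init__(self, name):
        self.name = name
        self.items = []        # (data, prio) pairs
        self.sub_queues = []   # remembered so split() can undo merge()

    def insert(self, data, prio):
        self.items.append((data, prio))

    def min_prio(self):
        return min(p for (_d, p) in self.items)

    def min_elem(self):
        return min(self.items, key=lambda dp: dp[1])[0]

    def merge(self, sub_queues):
        # Take ownership of all elements, remembering who owned what.
        self.sub_queues = [(sub, list(sub.items)) for sub in sub_queues]
        for sub in sub_queues:
            self.items.extend(sub.items)
            sub.items = []

    def split(self):
        # Undo the merge: give each sub-queue its elements back.
        for (sub, items) in self.sub_queues:
            sub.items = items
        self.items = []
        self.sub_queues = []


a = NaiveQueue("A"); a.insert("apple", 5)
b = NaiveQueue("B"); b.insert("pear", 3)

q = NaiveQueue("Q")
q.merge([a, b])
assert q.min_prio() == 3 and q.min_elem() == "pear"

q.split()  # undo the merge
assert a.min_elem() == "apple" and b.min_prio() == 3
```

The point of the tree-based version in the diff is to get the same observable behaviour with merge, split, find and min queries all in logarithmic time.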


@@ -574,60 +574,6 @@ class TestVerificationFail(unittest.TestCase):
verify_optimum(ctx)
class TestValueRange(unittest.TestCase):
"""Test graphs that force big values for dual variables or edge slack."""
def test_big_blossom_dual(self):
"""Force modified blossom dual close to 2*maxweight."""
#
# [3]
# / \
# W-4/ \W
# 7 / \
# [1]-----[2]-----[4]
# | W-4
# 5|
# | 1
# [5]-----[6]
#
w = 100000
pairs = mwm([(1,2,7), (2,3,w-4), (2,4,w-4), (2,5,5), (3,4,w), (5,6,1)])
self.assertEqual(pairs, [(1,2), (3,4), (5,6)])
def test_negative_blossom_dual(self):
"""Force modified blossom dual close to -maxweight."""
#
# [3]
# / \
# 5/ \7
# 1 / \
# [1]-----[2]-----[4]
# | 5
# 1|
# | W
# [5]-----[6]
#
w = 100000
pairs = mwm([(1,2,1), (2,3,5), (2,4,5), (2,5,1), (3,4,7), (5,6,w)])
self.assertEqual(pairs, [(1,2), (3,4), (5,6)])
def test_big_edge_slack(self):
"""Force modified edge slack close to 3*maxweight."""
#
# 6 W W-2 5
# [1]-----[2]-----[3]-----[4]-----[5]
# | |
# |1 |1
# | |
# [6]-----[7]-----[8]-----[9]-----[10]
# 6 W W-2 5
#
w = 100000
pairs = mwm([(1,2,6), (1,6,1), (2,3,w), (3,4,w-2), (3,8,1), (4,5,5),
(6,7,6), (7,8,w), (8,9,w-2), (9,10,5)])
self.assertEqual(pairs, [(1,6), (2,3), (4,5), (7,8), (9,10)])
if __name__ == "__main__":
unittest.main()


@@ -3,11 +3,11 @@
import random
import unittest
from mwmatching.datastruct import ConcatenableQueue, PriorityQueue
from mwmatching.datastruct import UnionFindQueue, PriorityQueue
class TestConcatenableQueue(unittest.TestCase):
"""Test ConcatenableQueue."""
class TestUnionFindQueue(unittest.TestCase):
"""Test UnionFindQueue."""
def _check_tree(self, queue):
"""Check tree balancing rules and priority info."""
@@ -20,33 +20,40 @@ class TestConcatenableQueue(unittest.TestCase):
node = nodes.pop()
if node.left is not None:
self.assertIs(node.left.parent, node)
nodes.append(node.left)
if node.right is not None:
self.assertIs(node.right.parent, node)
nodes.append(node.right)
if node is not queue.tree:
self.assertIsNone(node.owner)
if node.height == 0:
self.assertEqual(len(node.childs), 0)
self.assertIs(node.min_node, node)
else:
self.assertIn(len(node.childs), (2, 3))
best_node = set()
best_prio = None
for child in node.childs:
self.assertIs(child.parent, node)
self.assertEqual(child.height, node.height - 1)
nodes.append(child)
if ((best_prio is None)
or (child.min_node.prio < best_prio)):
best_node = {child.min_node}
lh = 0 if node.left is None else node.left.height
rh = 0 if node.right is None else node.right.height
self.assertEqual(node.height, 1 + max(lh, rh))
self.assertLessEqual(lh, rh + 1)
self.assertLessEqual(rh, lh + 1)
best_node = {node}
best_prio = node.prio
for child in (node.left, node.right):
if child is not None:
if child.min_node.prio < best_prio:
best_prio = child.min_node.prio
best_node = {child.min_node}
elif child.min_node.prio == best_prio:
best_node.add(child.min_node)
self.assertEqual(node.min_node.prio, best_prio)
self.assertIn(node.min_node, best_node)
self.assertEqual(node.min_node.prio, best_prio)
self.assertIn(node.min_node, best_node)
def test_single(self):
"""Single element."""
q = ConcatenableQueue("Q")
q = UnionFindQueue("Q")
with self.assertRaises(Exception):
q.min_prio()
@@ -55,7 +62,7 @@ class TestConcatenableQueue(unittest.TestCase):
q.min_elem()
n = q.insert("a", 4)
self.assertIsInstance(n, ConcatenableQueue.Node)
self.assertIsInstance(n, UnionFindQueue.Node)
self._check_tree(q)
@@ -77,22 +84,22 @@ class TestConcatenableQueue(unittest.TestCase):
def test_simple(self):
"""Simple test, 5 elements."""
q1 = ConcatenableQueue("A")
q1 = UnionFindQueue("A")
n1 = q1.insert("a", 5)
q2 = ConcatenableQueue("B")
q2 = UnionFindQueue("B")
n2 = q2.insert("b", 6)
q3 = ConcatenableQueue("C")
q3 = UnionFindQueue("C")
n3 = q3.insert("c", 7)
q4 = ConcatenableQueue("D")
q4 = UnionFindQueue("D")
n4 = q4.insert("d", 4)
q5 = ConcatenableQueue("E")
q5 = UnionFindQueue("E")
n5 = q5.insert("e", 3)
q345 = ConcatenableQueue("P")
q345 = UnionFindQueue("P")
q345.merge([q3, q4, q5])
self._check_tree(q345)
@@ -113,7 +120,7 @@ class TestConcatenableQueue(unittest.TestCase):
self.assertEqual(q345.min_prio(), 4)
self.assertEqual(q345.min_elem(), "d")
q12 = ConcatenableQueue("Q")
q12 = UnionFindQueue("Q")
q12.merge([q1, q2])
self._check_tree(q12)
@@ -122,7 +129,7 @@ class TestConcatenableQueue(unittest.TestCase):
self.assertEqual(q12.min_prio(), 5)
self.assertEqual(q12.min_elem(), "a")
q12345 = ConcatenableQueue("R")
q12345 = UnionFindQueue("R")
q12345.merge([q12, q345])
self._check_tree(q12345)
@@ -194,30 +201,30 @@ class TestConcatenableQueue(unittest.TestCase):
queues = []
nodes = []
for i in range(14):
q = ConcatenableQueue(chr(ord("A") + i))
q = UnionFindQueue(chr(ord("A") + i))
n = q.insert(chr(ord("a") + i), prios[i])
queues.append(q)
nodes.append(n)
q = ConcatenableQueue("AB")
q = UnionFindQueue("AB")
q.merge(queues[0:2])
queues.append(q)
self._check_tree(q)
self.assertEqual(q.min_prio(), min(prios[0:2]))
q = ConcatenableQueue("CDE")
q = UnionFindQueue("CDE")
q.merge(queues[2:5])
queues.append(q)
self._check_tree(q)
self.assertEqual(q.min_prio(), min(prios[2:5]))
q = ConcatenableQueue("FGHI")
q = UnionFindQueue("FGHI")
q.merge(queues[5:9])
queues.append(q)
self._check_tree(q)
self.assertEqual(q.min_prio(), min(prios[5:9]))
q = ConcatenableQueue("JKLMN")
q = UnionFindQueue("JKLMN")
q.merge(queues[9:14])
queues.append(q)
self._check_tree(q)
@@ -232,7 +239,7 @@ class TestConcatenableQueue(unittest.TestCase):
for i in range(9, 14):
self.assertEqual(nodes[i].find(), "JKLMN")
q = ConcatenableQueue("ALL")
q = UnionFindQueue("ALL")
q.merge(queues[14:18])
queues.append(q)
self._check_tree(q)
@@ -291,7 +298,7 @@ class TestConcatenableQueue(unittest.TestCase):
for i in range(4000):
name = f"q{i}"
q = ConcatenableQueue(name)
q = UnionFindQueue(name)
p = rng.random()
n = q.insert(f"n{i}", p)
nodes.append(n)
@@ -319,7 +326,7 @@ class TestConcatenableQueue(unittest.TestCase):
subs = rng.sample(sorted(live_queues), k)
name = f"Q{i}"
q = ConcatenableQueue(name)
q = UnionFindQueue(name)
q.merge([queues[nn] for nn in subs])
self._check_tree(q)
queues[name] = q