Compare commits
76 Commits
77dac28056...8eb6b77523
Author | SHA1 | Date |
---|---|---|
Joris van Rantwijk | 8eb6b77523 | |
Joris van Rantwijk | 43502512ee | |
Joris van Rantwijk | d4e86e9dcc | |
Joris van Rantwijk | 53e5ec618e | |
Joris van Rantwijk | 3914ea15a7 | |
Joris van Rantwijk | d539e5cfbb | |
Joris van Rantwijk | f652a08d43 | |
Joris van Rantwijk | 0f18b7b05a | |
Joris van Rantwijk | 087799cdca | |
Joris van Rantwijk | 02917b2caf | |
Joris van Rantwijk | f0773eb84b | |
Joris van Rantwijk | 1e6f2a11c4 | |
Joris van Rantwijk | b5ccbdeda4 | |
Joris van Rantwijk | 3815335a9f | |
Joris van Rantwijk | 082397ef80 | |
Joris van Rantwijk | ab691813b3 | |
Joris van Rantwijk | 105679c986 | |
Joris van Rantwijk | 7683f891d5 | |
Joris van Rantwijk | 22251e64e8 | |
Joris van Rantwijk | 5b5c107a5c | |
Joris van Rantwijk | e8020f3e58 | |
Joris van Rantwijk | 2271df1897 | |
Joris van Rantwijk | 228da75495 | |
Joris van Rantwijk | 5500750c13 | |
Joris van Rantwijk | 39eaea451e | |
Joris van Rantwijk | 7ea1562cc7 | |
Joris van Rantwijk | 55a98238aa | |
Joris van Rantwijk | b17ca1a364 | |
Joris van Rantwijk | 67ca294840 | |
Joris van Rantwijk | efb238ff8e | |
Joris van Rantwijk | e103a493fc | |
Joris van Rantwijk | 4670cf1dca | |
Joris van Rantwijk | c731c32473 | |
Joris van Rantwijk | e8490010d6 | |
Joris van Rantwijk | dc8cdae225 | |
Joris van Rantwijk | e2f5b63a01 | |
Joris van Rantwijk | 6bf04df77b | |
Joris van Rantwijk | 99f8a2d822 | |
Joris van Rantwijk | ed70402310 | |
Joris van Rantwijk | 54f59db753 | |
Joris van Rantwijk | c58374e6fb | |
Joris van Rantwijk | 658a393bb8 | |
Joris van Rantwijk | c19fa9a76c | |
Joris van Rantwijk | d3475834ab | |
Joris van Rantwijk | 147640329f | |
Joris van Rantwijk | f2e8ca1357 | |
Joris van Rantwijk | b2d4de41f9 | |
Joris van Rantwijk | d2debb6d6f | |
Joris van Rantwijk | bbe19a6798 | |
Joris van Rantwijk | 50ef772271 | |
Joris van Rantwijk | 6a75ffaf63 | |
Joris van Rantwijk | 4c6115fb2f | |
Joris van Rantwijk | 0e76e6472b | |
Joris van Rantwijk | f35a640e43 | |
Joris van Rantwijk | b960a85b6c | |
Joris van Rantwijk | f8c6b99842 | |
Joris van Rantwijk | 1a98624f2b | |
Joris van Rantwijk | 61524990d7 | |
Joris van Rantwijk | 73641d7b70 | |
Joris van Rantwijk | 0675230692 | |
Joris van Rantwijk | de30ac3c5e | |
Joris van Rantwijk | aab2acd78e | |
Joris van Rantwijk | e9baa88c70 | |
Joris van Rantwijk | 73479532ac | |
Joris van Rantwijk | 04b6908449 | |
Joris van Rantwijk | 3a77749425 | |
Joris van Rantwijk | 9ee26584ab | |
Joris van Rantwijk | 225311dae0 | |
Joris van Rantwijk | 7cc1666cf2 | |
Joris van Rantwijk | 6318de3b1f | |
Joris van Rantwijk | b2e055b357 | |
Joris van Rantwijk | de03796d99 | |
Joris van Rantwijk | a23c38eb70 | |
Joris van Rantwijk | 13b6b76d47 | |
Joris van Rantwijk | 91d4afb271 | |
Joris van Rantwijk | c8a3f7f684 |
.gitignore

```diff
@@ -2,6 +2,8 @@ __pycache__/
 .coverage
 cpp/test_mwmatching
+cpp/test_concatenable_queue
+cpp/test_priority_queue
 cpp/run_matching
 cpp/run_matching_dbg
 tests/lemon/lemon_matching
```

README.md (76 lines changed)
````diff
@@ -1,43 +1,74 @@
 # Maximum Weighted Matching
 
-This repository contains a Python 3 implementation of maximum weighted matching in general graphs.
+This repository contains implementations of maximum weighted matching for general graphs.
 
 In graph theory, a _matching_ is a subset of edges that does not use any vertex more than once.
 For an edge-weighted graph, a _maximum weight matching_ is a matching that achieves
 the largest possible sum of weights of matched edges.
 
 The code in this repository is based on a variant of the blossom algorithm that runs in
-_O(n<sup>3</sup>)_ steps.
+_O(n m log n)_ steps.
 See the file [Algorithm.md](doc/Algorithm.md) for a detailed description.
 
-You may find this repository useful if ...
-
-- you want a stand-alone, pure Python module that calculates maximum weight matchings
-  with reasonable efficiency;
-- or you want to play around with a maximum weight matching algorithm to learn how it works.
+## Python
 
-This repository is probably not the best place if ...
+The folder [python/](python/) contains a Python 3 package `mwmatching` that implements
+maximum weighted matching.
+The Python code is self-contained -- it has no dependencies outside the standard library.
 
-- you need a very fast routine that quickly matches large graphs (more than ~ 1000 vertices).
-  In that case I recommend [LEMON](http://lemon.cs.elte.hu/trac/lemon),
-  a C++ library that provides a blazingly fast implementation.
-- or you want a high quality Python package that provides abstract graph data structures
-  and graph algorithms.
-  In that case I recommend [NetworkX](https://networkx.org/).
-- or you are only interested in bipartite graphs.
-  There are simpler and faster algorithms for matching bipartite graphs.
+To use the package, set your `PYTHONPATH` to the location of the package `mwmatching`.
+Alternatively, you can install the package into your Python environment as follows:
+
+```bash
+cd python
+pip install .
+```
+
+Using the package is easy.
+You describe the input graph by listing its edges.
+Each edge is represented as a tuple of vertex indices and the weight of the edge.
+The example below finds a matching in a graph with 5 vertices and 5 edges.
+The maximum weight matching contains two edges and has total weight 11.
+
+```python
+from mwmatching import maximum_weight_matching
+edges = [(0, 1, 3), (1, 2, 8), (1, 4, 6), (2, 3, 5), (2, 4, 7)]
+matching = maximum_weight_matching(edges)
+print(matching)  # prints [(1, 4), (2, 3)]
+```
````
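For a graph this small, the reported answer can be cross-checked by exhaustive search. The sketch below is independent of the `mwmatching` package; the `brute_force_mwm` helper is purely illustrative and not part of the repository:

```python
from itertools import combinations

def brute_force_mwm(edges):
    """Try every subset of edges; keep the heaviest vertex-disjoint one."""
    best_weight, best_matching = 0, []
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            vertices = [v for (a, b, _) in subset for v in (a, b)]
            if len(vertices) == len(set(vertices)):  # no vertex used twice
                weight = sum(w for (_, _, w) in subset)
                if weight > best_weight:
                    best_weight = weight
                    best_matching = [(a, b) for (a, b, _) in subset]
    return best_weight, best_matching

edges = [(0, 1, 3), (1, 2, 8), (1, 4, 6), (2, 3, 5), (2, 4, 7)]
print(brute_force_mwm(edges))  # (11, [(1, 4), (2, 3)])
```

This confirms the total weight of 11 claimed above; exhaustive search is of course only feasible for tiny graphs.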
````diff
+## C++
+
+The folder [cpp/](cpp/) contains a header-only C++ implementation of maximum weighted matching.
+
+The C++ code is self-contained and can easily be linked into an application.
+It is also reasonably efficient.
+
+For serious use cases, [LEMON](https://lemon.cs.elte.hu/trac/lemon) may be a better choice.
+LEMON is a C++ library that provides a very fast and robust implementation of
+maximum weighted matching and many other graph algorithms.
+To my knowledge, it is the only free software library that provides a high-quality
+matching algorithm.
 
 ## Repository structure
 
 ```
 python/
-    mwmatching.py : Python implementation of maximum weight matching
-    test_mwmatching.py : Unit tests
+    mwmatching/ : Python package for maximum weight matching
+        __init__.py
+        algorithm.py : Algorithm implementation
+        datastruct.py : Internal data structures
+        tests/
+            test_algorithm.py : Unit tests for the algorithm
+            test_datastruct.py : Unit tests for data structures
     run_matching.py : Command-line program to run the matching algorithm
 
 cpp/
-    mwmatching.h : C++ implementation of maximum weight matching
+    mwmatching.hpp : C++ implementation of maximum weight matching
+    concatenable_queue.hpp : Data structure used by matching algorithm
+    priority_queue.hpp : Data structure used by matching algorithm
     test_mwmatching.cpp : Unit tests
     run_matching.cpp : Command-line program to run the matching algorithm
```
````
```diff
@@ -76,6 +107,11 @@ My implementation follows the description of the algorithm as it appears in the
 However, earlier versions of the algorithm were invented and improved by several other scientists.
 See the file [Algorithm.md](doc/Algorithm.md) for links to the most important papers.
 
+I used some ideas from the source code of the `MaxWeightedMatching` class in
+[LEMON](https://lemon.cs.elte.hu/trac/lemon):
+the technique to implement lazy updates of vertex dual variables,
+and the approach to re-use alternating trees after augmenting the matching.
+
 I used Fortran programs `hardcard.f`, `t.f` and `tt.f` by R. B. Mattingly and N. Ritchey
 to generate test graphs.
 These programs are part of the DIMACS Network Flows and Matching implementation challenge.
```
```diff
@@ -83,7 +119,7 @@ They can be found in the
 [DIMACS Netflow archive](http://archive.dimacs.rutgers.edu/pub/netflow/).
 
 To check the correctness of my results, I used other maximum weight matching solvers:
-the `MaxWeightedMatching` module in [LEMON](http://lemon.cs.elte.hu/trac/lemon),
+the `MaxWeightedMatching` module in LEMON,
 and the program
 [`wmatch`](http://archive.dimacs.rutgers.edu/pub/netflow/matching/weighted/solver-1/)
 by Edward Rothberg.
```
```diff
@@ -94,7 +130,7 @@ by Edward Rothberg.
 The following license applies to the software in this repository, excluding the folder `doc`.
 This license is sometimes called the MIT License or the Expat License:
 
-> Copyright (c) 2023 Joris van Rantwijk
+> Copyright (c) 2023-2024 Joris van Rantwijk
 >
 > Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
 >
```

cpp/Makefile (22 lines changed)
```diff
@@ -6,22 +6,34 @@ CXXFLAGS = -std=c++11 -Wall -Wextra $(OPTFLAGS) $(DBGFLAGS)
 LIB_BOOST_TEST = -l:libboost_unit_test_framework.a
 
 .PHONY: all
-all: run_matching run_matching_dbg test_mwmatching
+all: run_matching run_matching_dbg test_mwmatching test_concatenable_queue test_priority_queue
 
-run_matching: run_matching.cpp mwmatching.h
+run_matching: run_matching.cpp mwmatching.hpp concatenable_queue.hpp priority_queue.hpp
 	$(CXX) $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) -o $@ $<
 
 run_matching_dbg: OPTFLAGS = -Og
 run_matching_dbg: DBGFLAGS = -g -fsanitize=address -fsanitize=undefined
-run_matching_dbg: run_matching.cpp mwmatching.h
+run_matching_dbg: run_matching.cpp mwmatching.hpp concatenable_queue.hpp priority_queue.hpp
 	$(CXX) $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) -o $@ $<
 
 test_mwmatching: OPTFLAGS = -O1
 test_mwmatching: DBGFLAGS = -fsanitize=address -fsanitize=undefined
-test_mwmatching: test_mwmatching.cpp mwmatching.h
+test_mwmatching: test_mwmatching.cpp mwmatching.hpp concatenable_queue.hpp priority_queue.hpp
 	$(CXX) $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) -o $@ $< $(LIB_BOOST_TEST)
 
+test_concatenable_queue: OPTFLAGS = -O1
+test_concatenable_queue: DBGFLAGS = -fsanitize=address -fsanitize=undefined
+test_concatenable_queue: test_concatenable_queue.cpp concatenable_queue.hpp
+	$(CXX) $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) -o $@ $< $(LIB_BOOST_TEST)
+
+test_priority_queue: OPTFLAGS = -O1
+test_priority_queue: DBGFLAGS = -fsanitize=address -fsanitize=undefined
+test_priority_queue: test_priority_queue.cpp priority_queue.hpp
+	$(CXX) $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) -o $@ $< $(LIB_BOOST_TEST)
+
 .PHONY: clean
 clean:
-	$(RM) run_matching run_matching_dbg test_mwmatching
+	$(RM) run_matching run_matching_dbg
+	$(RM) test_mwmatching
+	$(RM) test_concatenable_queue test_priority_queue
```
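The new header added below implements a concatenable queue: a min-priority queue that also supports merging several queues and later undoing that merge. Its intended semantics can be modelled with a deliberately naive Python sketch. `ToyConcatenableQueue` is a hypothetical stand-in using O(n) lists, whereas the real structure uses an AVL tree to reach O(log n) per operation:

```python
class ToyConcatenableQueue:
    """Naive model of the ConcatenableQueue semantics (lists, not an AVL tree)."""

    def __init__(self, name):
        self.name = name
        self.items = []        # list of (priority, data) pairs
        self.sub_queues = []   # remembered merge inputs, for split()

    def insert(self, prio, data):
        assert not self.items  # only one direct insert per queue
        self.items = [(prio, data)]

    def min_prio(self):
        return min(self.items)[0]

    def min_elem(self):
        return min(self.items)[1]

    def merge(self, sub_queues):
        """Move all elements of the sub-queues into this (empty) queue."""
        assert not self.items
        self.sub_queues = [(q, len(q.items)) for q in sub_queues]
        for q in sub_queues:
            self.items.extend(q.items)
            q.items = []

    def split(self):
        """Undo the merge: hand every element back to its sub-queue."""
        pos = 0
        for q, n in self.sub_queues:
            q.items = self.items[pos:pos + n]
            pos += n
        self.items, self.sub_queues = [], []

a, b = ToyConcatenableQueue("a"), ToyConcatenableQueue("b")
a.insert(5, "x")
b.insert(2, "y")
m = ToyConcatenableQueue("m")
m.merge([a, b])
print(m.min_prio(), m.min_elem())  # 2 y
m.split()
print(a.min_prio(), b.min_prio())  # 5 2
```

The real header additionally lets each element find the name of its containing queue; the toy omits that, and its merge/split are linear rather than logarithmic.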

cpp/concatenable_queue.hpp (new file)

@@ -0,0 +1,841 @@
/**
 * Concatenable queue data structure.
 */

#ifndef MWMATCHING_CONCATENABLE_QUEUE_H_
#define MWMATCHING_CONCATENABLE_QUEUE_H_

#include <algorithm>
#include <cassert>
#include <tuple>


namespace mwmatching {


/**
 * Priority queue supporting efficient merge and split operations.
 *
 * Conceptually, this is a combination of a disjoint set and a priority queue.
 * Each queue has a "name".
 * Each element has associated "data".
 * Each element has a priority.
 * Each element is contained in at most one queue at any time.
 *
 * The following operations can be done efficiently:
 *  - Find the name of the queue that contains a given element.
 *  - Change the priority of a given element.
 *  - Find the element with lowest priority in a given queue.
 *  - Merge two or more queues.
 *  - Undo a previous merge step.
 *
 * A ConcatenableQueue instance may be destructed if it is empty and not
 * currently merged into a larger queue. Alternatively, a group of related
 * ConcatenableQueue instances and their Node instances may be destructed
 * together, even if non-empty. In this case, the objects may be destructed
 * in any order. No other interactions with the objects are allowed once
 * destruction has started.
 *
 * This data structure is implemented as an AVL tree, with minimum-priority
 * tracking added to it.
 * See also
 *   https://en.wikipedia.org/wiki/Avl_tree
 * and
 *   G. Blelloch, D. Ferizovic, Y. Sun, Parallel Ordered Sets Using Join,
 *   https://arxiv.org/abs/1602.02120
 */
template <typename PrioType,
          typename NameType,
          typename DataType>
class ConcatenableQueue
{
public:

    /**
     * A Node instance represents an element in a concatenable queue.
     *
     * A Node instance must remain valid while it is contained in a queue.
     * The containing queue holds a pointer to the Node.
     *
     * A Node instance may be destructed if it is not contained in any queue.
     * Alternatively, a Node instance may be destructed just before or after
     * destructing the containing queue. In this case, no intermediate
     * interactions with the node or queue are allowed.
     */
    class Node
    {
    public:
        /** Construct an unused node, not yet contained in any queue. */
        Node()
          : owner_(nullptr)
          , parent_(nullptr)
          , left_child_(nullptr)
          , right_child_(nullptr)
          , height_(0)
        { }

        // Prevent copying.
        Node(const Node&) = delete;
        Node& operator=(const Node&) = delete;

        /**
         * Return the name of the queue that contains this element.
         *
         * The node must be contained in a queue.
         * This function takes time O(log(n)).
         */
        NameType find() const
        {
            const Node* node = this;
            while (node->parent_) {
                node = node->parent_;
            }
            assert(node->owner_);
            return node->owner_->name();
        }

        /**
         * Return the priority of this element.
         *
         * The node must be contained in a queue.
         * This function takes time O(1).
         */
        PrioType prio() const
        {
            assert(height_ != 0);
            return prio_;
        }

        /**
         * Change the priority of this element.
         *
         * The node must be contained in a queue.
         * This function takes time O(log(n)).
         */
        void set_prio(PrioType new_prio)
        {
            assert(height_ != 0);

            prio_ = new_prio;

            Node* node = this;
            while (node) {
                PrioType min_prio = node->prio_;
                DataType min_data = node->data_;
                for (Node* child : { node->left_child_,
                                     node->right_child_ }) {
                    if (child && child->min_prio_ < min_prio) {
                        min_prio = child->min_prio_;
                        min_data = child->min_data_;
                    }
                }
                node->min_prio_ = min_prio;
                node->min_data_ = min_data;
                node = node->parent_;
            }
        }

    private:
        PrioType prio_;
        DataType data_;
        PrioType min_prio_;
        DataType min_data_;
        ConcatenableQueue* owner_;
        Node* parent_;
        Node* left_child_;
        Node* right_child_;
        unsigned int height_;

        friend class ConcatenableQueue;
    };
    /** Construct an empty queue. */
    explicit ConcatenableQueue(const NameType& name)
      : name_(name)
      , tree_(nullptr)
      , first_node_(nullptr)
      , first_subqueue_(nullptr)
      , next_subqueue_(nullptr)
    { }

    // Prevent copying.
    ConcatenableQueue(const ConcatenableQueue&) = delete;
    ConcatenableQueue& operator=(const ConcatenableQueue&) = delete;

    /** Return the name of this queue. */
    NameType name() const
    {
        return name_;
    }

    /**
     * Insert the specified Node into the queue.
     *
     * The Node instance must be unused (not yet contained in any queue).
     *
     * The queue must be empty. Only one element can be inserted into
     * a queue in this way. Larger queues can only result from merging.
     *
     * The queue stores a pointer to the Node instance. The Node instance must
     * remain valid for as long as it is contained in any queue.
     *
     * This function takes time O(1).
     */
    void insert(Node* node, PrioType prio, const DataType& data)
    {
        assert(! tree_);
        assert(! first_node_);
        assert(node->height_ == 0);
        assert(! node->parent_);
        assert(! node->left_child_);
        assert(! node->right_child_);

        node->prio_ = prio;
        node->data_ = data;
        node->min_prio_ = prio;
        node->min_data_ = data;
        node->owner_ = this;
        node->height_ = 1;

        tree_ = node;
        first_node_ = node;
    }

    /**
     * Return the minimum priority of any element in the queue.
     *
     * The queue must be non-empty.
     * This function takes time O(1).
     */
    PrioType min_prio() const
    {
        assert(tree_);
        return tree_->min_prio_;
    }

    /**
     * Return the element with minimum priority.
     *
     * The queue must be non-empty.
     * This function takes time O(1).
     */
    DataType min_elem() const
    {
        assert(tree_);
        return tree_->min_data_;
    }
    /**
     * Merge the specified queues.
     *
     * This queue instance must initially be empty.
     * All specified sub-queues must initially be non-empty.
     *
     * This function removes all elements from the specified sub-queues
     * and adds them to this queue.
     *
     * After merging, this queue retains references to the sub-queues.
     * These may be used later to split (undo the merge step).
     *
     * This function takes time O(n_sub_queues * log(n)).
     */
    template <typename QueueIterator>
    void merge(QueueIterator sub_queues_begin, QueueIterator sub_queues_end)
    {
        assert(! tree_);
        assert(! first_node_);
        assert(sub_queues_begin != sub_queues_end);

        // Pull the root node from the first sub-queue.
        ConcatenableQueue* sub = *sub_queues_begin;
        assert(sub->tree_);
        Node* merged_tree = sub->tree_;
        sub->tree_ = nullptr;

        // Clear owner pointer from tree.
        assert(merged_tree->owner_ == sub);
        merged_tree->owner_ = nullptr;

        // Copy first node to this queue.
        assert(sub->first_node_);
        first_node_ = sub->first_node_;

        // Build linked list of sub-queues, starting with the first sub-queue.
        assert(! sub->next_subqueue_);
        first_subqueue_ = sub;

        // Merge remaining sub-queues.
        QueueIterator it = sub_queues_begin;
        ++it;
        while (it != sub_queues_end) {
            ConcatenableQueue* prev_sub = sub;
            sub = *it;

            // Clear owner pointer and root node from the sub-queue.
            assert(sub->tree_);
            assert(sub->tree_->owner_ == sub);
            sub->tree_->owner_ = nullptr;

            // Merge sub-queue tree into our current tree.
            merged_tree = merge_tree(merged_tree, sub->tree_);

            // Clear root pointer from sub-queue.
            sub->tree_ = nullptr;

            // Add sub-queue to linked list of sub-queues.
            assert(! sub->next_subqueue_);
            prev_sub->next_subqueue_ = sub;

            ++it;
        }

        // Put owner pointer in the root node of the tree.
        merged_tree->owner_ = this;
        tree_ = merged_tree;
    }

    /**
     * Undo the merge step that created this queue.
     *
     * Remove all elements from this queue and put them back in
     * the sub-queues from which they came.
     *
     * After splitting, this queue will be empty.
     *
     * This function takes time O(n_sub_queues * log(n)).
     */
    void split()
    {
        assert(tree_);
        assert(first_subqueue_);

        // Clear the owner pointer from the root node.
        assert(tree_->owner_ == this);
        tree_->owner_ = nullptr;

        Node* rtree = tree_;
        ConcatenableQueue* sub = first_subqueue_;

        // Repeatedly split the tree to reconstruct each sub-queue.
        while (sub->next_subqueue_) {
            ConcatenableQueue* next_sub = sub->next_subqueue_;
            sub->next_subqueue_ = nullptr;

            // Split the tree on the first node of the next sub-queue.
            assert(next_sub->first_node_);
            Node* ltree;
            std::tie(ltree, rtree) = split_tree(next_sub->first_node_);

            // Put the left half of the tree in the current sub-queue.
            sub->tree_ = ltree;
            ltree->owner_ = sub;

            // Continue to next sub-queue.
            sub = next_sub;
        }

        // Put the remaining part of the tree in the last sub-queue.
        rtree->owner_ = sub;
        sub->tree_ = rtree;

        // Clean up this instance.
        tree_ = nullptr;
        first_node_ = nullptr;
        first_subqueue_ = nullptr;
    }
private:
    /**
     * Merge two trees and rebalance.
     *
     * This function takes time O(log(n)).
     */
    static Node* merge_tree(Node* ltree, Node* rtree)
    {
        // Remove the last node from the left tree.
        Node* node;
        std::tie(ltree, node) = split_last_node(ltree);

        // Join the trees.
        return join(ltree, node, rtree);
    }

    /**
     * Remove the last node from the tree and rebalance.
     *
     * This function takes time O(log(n)).
     */
    static std::tuple<Node*, Node*> split_last_node(Node* tree)
    {
        assert(tree);

        /*
         * Descend down the right spine of the tree to find the last node.
         *
         *      tree
         *     /    \
         *           X
         *          / \
         *             X  <-- parent
         *            / \
         *               last_node
         *              /
         *             X  <-- ltree
         */

        // Find last node.
        Node* last_node = tree;
        while (last_node->right_child_) {
            last_node = last_node->right_child_;
        }

        // Detach its left subtree.
        Node* ltree = last_node->left_child_;
        last_node->left_child_ = nullptr;
        if (ltree) {
            ltree->parent_ = nullptr;
        }

        // Detach from the parent.
        Node* parent = last_node->parent_;
        last_node->parent_ = nullptr;
        if (parent) {
            parent->right_child_ = nullptr;
        }

        // Reconstruct along the right spine of the original tree.
        while (parent) {

            // Step to parent.
            Node* cur = parent;
            parent = cur->parent_;

            // Detach from its own parent.
            cur->parent_ = nullptr;
            if (parent) {
                parent->right_child_ = nullptr;
            }

            // Join the current node to the reconstructed tree.
            ltree = join(cur->left_child_, cur, ltree);
        }

        return std::make_tuple(ltree, last_node);
    }
    /**
     * Split a tree on a specified node.
     *
     * Two new trees will be constructed.
     * Leaf nodes to the left of "split_node" will go to the left tree.
     * Leaf nodes to the right of "split_node", and "split_node" itself,
     * will go to the right tree.
     *
     * This function takes time O(log(n)).
     */
    static std::tuple<Node*, Node*> split_tree(Node* split_node)
    {
        assert(split_node);

        // All left-descendants of "split_node" belong in the left tree.
        // Detach it from "split_node".
        Node* ltree = split_node->left_child_;
        split_node->left_child_ = nullptr;
        if (ltree) {
            ltree->parent_ = nullptr;
        }

        // Start with an empty right tree.
        Node* rtree = nullptr;

        // Start at "split_node".
        // Take note of the fact that the "cut" between the two halves of
        // the tree runs down its left branch, since "split_node" itself
        // belongs to the right tree.
        Node* node = split_node;
        bool left_branch = true;

        // Work upwards through the tree, assigning each node either to
        // the new left tree or the new right tree.
        //
        // This loop runs for O(log(n)) iterations.
        // Each iteration calls join() once, taking time proportional
        // to the difference in height between the intermediate trees.
        // The total run time of all join() calls together is O(log(n)).
        while (node) {

            // Detach the current node from its parent.
            // Remember to which branch of the parent it was attached.
            Node* parent = node->parent_;
            node->parent_ = nullptr;
            bool parent_left_branch = true;
            if (parent) {
                parent_left_branch = (parent->left_child_ == node);
                if (parent_left_branch) {
                    parent->left_child_ = nullptr;
                } else {
                    assert(parent->right_child_ == node);
                    parent->right_child_ = nullptr;
                }
            }

            // Join the current node and its remaining descendants either
            // to the left tree or to the right tree.
            if (left_branch) {
                /*
                 * "node" belongs to the right tree.
                 * Join like this:
                 *
                 *       (node)  <--- new rtree
                 *       /    \
                 *  (rtree)  (node->right_child)
                 */
                assert(! node->left_child_);
                rtree = join(rtree, node, node->right_child_);
            } else {
                /*
                 * "node" belongs to the left tree.
                 * Join like this:
                 *
                 *              (node)  <--- new ltree
                 *              /    \
                 * (node->left_child)  (ltree)
                 */
                assert(! node->right_child_);
                ltree = join(node->left_child_, node, ltree);
            }

            // Continue with the parent node.
            node = parent;
            left_branch = parent_left_branch;
        }

        // Done. All that remains of the old tree is the two new halves.
        return std::make_tuple(ltree, rtree);
    }
    /** Return node height, or 0 if node == nullptr. */
    static unsigned int get_node_height(const Node* node)
    {
        return node ? node->height_ : 0;
    }

    /**
     * Repair the height and min-priority information of a node
     * after modifying its children.
     *
     * After repairing a node, it is typically necessary to also repair
     * its ancestors.
     */
    static void repair_node(Node* node)
    {
        Node* lchild = node->left_child_;
        Node* rchild = node->right_child_;

        // Repair node height.
        node->height_ = 1 + std::max(get_node_height(lchild),
                                     get_node_height(rchild));

        // Repair min-priority.
        PrioType min_prio = node->prio_;
        DataType min_data = node->data_;
        for (Node* child : { lchild, rchild }) {
            if (child && child->min_prio_ < min_prio) {
                min_prio = child->min_prio_;
                min_data = child->min_data_;
            }
        }
        node->min_prio_ = min_prio;
        node->min_data_ = min_data;
    }
    /** Rotate the subtree to the left and return the new root of the subtree. */
    static Node* rotate_left(Node* node)
    {
        /*
         *      N                 C
         *     / \               / \
         *    A   C    --->     N   D
         *       / \           / \
         *      B   D         A   B
         */
        Node* parent = node->parent_;
        Node* new_top = node->right_child_;
        assert(new_top);

        Node* nb = new_top->left_child_;
        node->right_child_ = nb;
        if (nb) {
            nb->parent_ = node;
        }

        new_top->left_child_ = node;
        node->parent_ = new_top;

        new_top->parent_ = parent;

        if (parent) {
            if (parent->left_child_ == node) {
                parent->left_child_ = new_top;
            } else {
                assert(parent->right_child_ == node);
                parent->right_child_ = new_top;
            }
        }

        repair_node(node);
        repair_node(new_top);

        return new_top;
    }

    /** Rotate the subtree to the right and return the new root of the subtree. */
    static Node* rotate_right(Node* node)
    {
        /*
         *        N             B
         *       / \           / \
         *      B   D  --->   A   N
         *     / \               / \
         *    A   C             C   D
         */
        Node* parent = node->parent_;
        Node* new_top = node->left_child_;
        assert(new_top);

        Node* nc = new_top->right_child_;
        node->left_child_ = nc;
        if (nc) {
            nc->parent_ = node;
        }

        new_top->right_child_ = node;
        node->parent_ = new_top;

        new_top->parent_ = parent;

        if (parent) {
            if (parent->left_child_ == node) {
                parent->left_child_ = new_top;
            } else {
                assert(parent->right_child_ == node);
                parent->right_child_ = new_top;
            }
        }

        repair_node(node);
        repair_node(new_top);

        return new_top;
    }
    /**
     * Join a left subtree, middle node and right subtree together.
     *
     * The left subtree is higher than the right subtree.
     */
    static Node* join_right(Node* ltree, Node* node, Node* rtree)
    {
        assert(ltree);
        unsigned int lh = ltree->height_;
        unsigned int rh = get_node_height(rtree);
        assert(lh > rh + 1);

        /*
         * Descend down the right spine of "ltree".
         * Stop at a node with compatible height, then insert "node"
         * and attach "rtree".
         *
         *    ltree
         *    /   \
         *         X
         *        / \
         *           X  <-- cur
         *          / \
         *             node
         *            /    \
         *           X    rtree
         */

        // Descend to a point with compatible height.
        Node* cur = ltree;
        while (cur->right_child_ && (cur->right_child_->height_ > rh + 1)) {
            cur = cur->right_child_;
        }

        // Insert "node" and "rtree".
        node->left_child_ = cur->right_child_;
        node->right_child_ = rtree;
        if (node->left_child_) {
            node->left_child_->parent_ = node;
        }
        if (rtree) {
            rtree->parent_ = node;
        }
        cur->right_child_ = node;
        node->parent_ = cur;

        // A double rotation may be necessary.
        if ((! cur->left_child_) || (cur->left_child_->height_ <= rh)) {
            node = rotate_right(node);
            cur = rotate_left(cur);
        } else {
            repair_node(node);
            repair_node(cur);
        }

        // Ascend from "cur" to the root of the tree; repair and rebalance.
        while (cur->parent_) {
            cur = cur->parent_;
            assert(cur->left_child_);
            assert(cur->right_child_);

            if (cur->left_child_->height_ + 1 < cur->right_child_->height_) {
                cur = rotate_left(cur);
            } else {
                repair_node(cur);
            }
        }

        return cur;
    }
/**
|
||||
* Join a left subtree, middle node and right subtree together.
|
||||
*
|
||||
* The right subtree is higher than the left subtree.
|
||||
*/
|
||||
static Node* join_left(Node* ltree, Node* node, Node* rtree)
|
||||
{
|
||||
assert(rtree);
|
||||
unsigned int lh = get_node_height(ltree);
|
||||
unsigned int rh = rtree->height_;
|
||||
assert(lh + 1 < rh);
|
||||
|
||||
/*
|
||||
* Descend down the left spine of "rtree".
|
||||
* Stop at a node with compatible height, then insert "node"
|
||||
* and attach "ltree".
|
||||
*
|
||||
* rtree
|
||||
* / \
|
||||
* X
|
||||
* / \
|
||||
* cur --> X
|
||||
* / \
|
||||
* node
|
||||
* / \
|
||||
* ltree X
|
||||
*/
|
||||
|
||||
// Descend to a point with compatible height.
|
||||
Node* cur = rtree;
|
||||
while (cur->left_child_ && (cur->left_child_->height_ > lh + 1)) {
|
||||
cur = cur->left_child_;
|
||||
}
|
||||
|
||||
// Insert "node" and "ltree".
|
||||
node->left_child_ = ltree;
|
||||
node->right_child_ = cur->left_child_;
|
||||
if (ltree) {
|
||||
ltree->parent_ = node;
|
||||
}
|
||||
if (node->right_child_) {
|
||||
node->right_child_->parent_ = node;
|
||||
}
|
||||
cur->left_child_ = node;
|
||||
node->parent_ = cur;
|
||||
|
||||
// A double rotation may be necessary.
|
||||
if ((! cur->right_child_) || (cur->right_child_->height_ <= lh)) {
|
||||
node = rotate_left(node);
|
||||
cur = rotate_right(cur);
|
||||
} else {
|
||||
repair_node(node);
|
||||
repair_node(cur);
|
||||
}
|
||||
|
||||
// Ascend from "cur" to the root of the tree; repair and rebalance.
|
||||
while (cur->parent_) {
|
||||
cur = cur->parent_;
|
||||
assert(cur->left_child_);
|
||||
assert(cur->right_child_);
|
||||
|
||||
if (cur->left_child_->height_ > cur->right_child_->height_ + 1) {
|
||||
cur = rotate_right(cur);
|
||||
} else {
|
||||
repair_node(cur);
|
||||
}
|
||||
}
|
||||
|
||||
return cur;
|
||||
}
|
||||
|
||||
/**
|
||||
* Join a left subtree, middle node and right subtree together.
|
||||
*
|
||||
* The left or right subtree may initially be a child of the middle
|
||||
* node; such links will be broken as needed.
|
||||
*
|
||||
* The left and right subtrees must be consistent, semi-balanced trees.
|
||||
* Height and priority of the middle node may initially be inconsistent;
|
||||
* this function will repair it.
|
||||
*
|
||||
* @return Root node of the joined tree.
|
||||
*/
|
||||
static Node* join(Node* ltree, Node* node, Node* rtree)
|
||||
{
|
||||
unsigned int lh = get_node_height(ltree);
|
||||
unsigned int rh = get_node_height(rtree);
|
||||
|
||||
if (lh > rh + 1) {
|
||||
assert(ltree);
|
||||
ltree->parent_ = nullptr;
|
||||
return join_right(ltree, node, rtree);
|
||||
} else if (lh + 1 < rh) {
|
||||
assert(rtree);
|
||||
rtree->parent_ = nullptr;
|
||||
return join_left(ltree, node, rtree);
|
||||
} else {
|
||||
/*
|
||||
* Subtree heights are compatible. Just join them.
|
||||
*
|
||||
* node
|
||||
* / \
|
||||
* ltree rtree
|
||||
* / \ / \
|
||||
*/
|
||||
node->parent_ = nullptr;
|
||||
node->left_child_ = ltree;
|
||||
if (ltree) {
|
||||
ltree->parent_ = node;
|
||||
}
|
||||
node->right_child_ = rtree;
|
||||
if (rtree) {
|
||||
rtree->parent_ = node;
|
||||
}
|
||||
repair_node(node);
|
||||
return node;
|
||||
}
|
||||
}
|
||||
|
||||
/** Name of this queue. */
|
||||
const NameType name_;
|
||||
|
||||
/** Root node of the tree, or "nullptr" if the queue is empty. */
|
||||
Node* tree_;
|
||||
|
||||
/** Left-most node that belongs to this queue. */
|
||||
Node* first_node_;
|
||||
|
||||
/** Pointer to first sub-queue that got merged into this instance. */
|
||||
ConcatenableQueue* first_subqueue_;
|
||||
|
||||
/** Linked list of sub-queues that were merged to build this queue. */
|
||||
ConcatenableQueue* next_subqueue_;
|
||||
};
|
||||
|
||||
|
||||
} // namespace mwmatching
|
||||
|
||||
#endif // MWMATCHING_CONCATENABLE_QUEUE_H_
|
File diff suppressed because it is too large
@@ -0,0 +1,266 @@
/**
 * Priority queue data structure.
 */

#ifndef MWMATCHING_PRIORITY_QUEUE_H_
#define MWMATCHING_PRIORITY_QUEUE_H_

#include <cassert>
#include <limits>
#include <vector>


namespace mwmatching {


/**
 * Min-priority queue implemented as a binary heap.
 *
 * Elements in a heap have a priority and associated "data".
 *
 * The following operations can be done efficiently:
 *  - Insert an element into the queue.
 *  - Remove an element from the queue.
 *  - Change the priority of a given element.
 *  - Find the element with lowest priority in the queue.
 */
template <typename PrioType,
          typename DataType>
class PriorityQueue
{
public:
    typedef unsigned int IndexType;
    static constexpr IndexType INVALID_INDEX =
        std::numeric_limits<IndexType>::max();

    /**
     * A Node instance represents an element in a PriorityQueue.
     *
     * A Node instance must remain valid while it is contained in a queue.
     * The containing queue holds a pointer to the Node.
     *
     * A Node instance may be destructed if it is not contained in any queue.
     * Alternatively, a Node instance may be destructed after its containing
     * queue instance has been destructed.
     */
    class Node
    {
    public:
        /** Construct an invalid node, not contained in any queue. */
        Node()
          : index_(INVALID_INDEX)
        { }

        // Prevent copying.
        Node(const Node&) = delete;
        Node& operator=(const Node&) = delete;

        /** Return true if this node is contained in a queue. */
        bool valid() const
        {
            return (index_ != INVALID_INDEX);
        }

        /**
         * Return the priority of this node in the queue.
         *
         * The node must be contained in a queue.
         * This function takes time O(1).
         */
        PrioType prio() const
        {
            assert(index_ != INVALID_INDEX);
            return prio_;
        }

    private:
        IndexType index_;
        PrioType prio_;
        DataType data_;

        friend class PriorityQueue;
    };

    /** Construct an empty queue. */
    PriorityQueue()
    { }

    // Prevent copying.
    PriorityQueue(const PriorityQueue&) = delete;
    PriorityQueue& operator=(const PriorityQueue&) = delete;

    /**
     * Remove all elements from the queue.
     *
     * This function takes time O(n).
     */
    void clear()
    {
        for (Node* node : heap_) {
            node->index_ = INVALID_INDEX;
        }
        heap_.clear();
    }

    /** Return true if the queue is empty. */
    bool empty() const
    {
        return heap_.empty();
    }

    /**
     * Return the minimum priority of any element in the queue.
     *
     * The queue must be non-empty.
     * This function takes time O(1).
     */
    PrioType min_prio() const
    {
        assert(! heap_.empty());
        Node* top = heap_.front();
        return top->prio_;
    }

    /**
     * Return the element with minimum priority.
     *
     * The queue must be non-empty.
     * This function takes time O(1).
     */
    DataType min_elem() const
    {
        assert(! heap_.empty());
        Node* top = heap_.front();
        return top->data_;
    }

    /**
     * Insert the given node into the queue with associated data.
     *
     * The node must not currently be contained in any queue.
     * This function takes amortized time O(log(n)).
     */
    void insert(Node* node, PrioType prio, const DataType& data)
    {
        assert(node->index_ == INVALID_INDEX);

        node->index_ = heap_.size();
        node->prio_ = prio;
        node->data_ = data;

        heap_.push_back(node);
        sift_up(node->index_);
    }

    /**
     * Update priority of an existing node.
     *
     * This function takes time O(log(n)).
     */
    void set_prio(Node* node, PrioType prio)
    {
        IndexType index = node->index_;
        assert(index != INVALID_INDEX);
        assert(heap_[index] == node);

        PrioType prev_prio = node->prio_;
        node->prio_ = prio;

        if (prio < prev_prio) {
            sift_up(index);
        } else if (prio > prev_prio) {
            sift_down(index);
        }
    }

    /**
     * Remove the specified element from the queue.
     *
     * This function takes time O(log(n)).
     */
    void remove(Node* node)
    {
        IndexType index = node->index_;
        assert(index != INVALID_INDEX);
        assert(heap_[index] == node);

        node->index_ = INVALID_INDEX;

        Node* move_node = heap_.back();
        heap_.pop_back();

        if (index < heap_.size()) {
            heap_[index] = move_node;
            move_node->index_ = index;
            if (move_node->prio_ < node->prio_) {
                sift_up(index);
            } else if (move_node->prio_ > node->prio_) {
                sift_down(index);
            }
        }
    }

private:
    /** Repair the heap along an ascending path to the root. */
    void sift_up(IndexType index)
    {
        Node* node = heap_[index];
        PrioType prio = node->prio_;

        while (index > 0) {
            IndexType next_index = (index - 1) / 2;
            Node* next_node = heap_[next_index];
            if (next_node->prio_ <= prio) {
                break;
            }
            heap_[index] = next_node;
            next_node->index_ = index;
            index = next_index;
        }

        node->index_ = index;
        heap_[index] = node;
    }

    /** Repair the heap along a descending path. */
    void sift_down(IndexType index)
    {
        Node* node = heap_[index];
        PrioType prio = node->prio_;

        IndexType num_elem = heap_.size();

        while (index < num_elem / 2) {
            IndexType next_index = 2 * index + 1;
            Node* next_node = heap_[next_index];

            if (next_index + 1 < num_elem) {
                Node* tmp_node = heap_[next_index + 1];
                if (tmp_node->prio_ <= next_node->prio_) {
                    ++next_index;
                    next_node = tmp_node;
                }
            }

            if (next_node->prio_ >= prio) {
                break;
            }

            heap_[index] = next_node;
            next_node->index_ = index;

            index = next_index;
        }

        heap_[index] = node;
        node->index_ = index;
    }

    /** Heap data structure. */
    std::vector<Node*> heap_;
};


} // namespace mwmatching

#endif // MWMATCHING_PRIORITY_QUEUE_H_
@@ -13,7 +13,7 @@
 #include <utility>
 #include <vector>
 
-#include "mwmatching.h"
+#include "mwmatching.hpp"
 
 
 namespace { // anonymous namespace
@@ -0,0 +1,403 @@
/*
 * Unit tests for ConcatenableQueue data structure.
 *
 * Depends on the Boost.Test unit test framework.
 * Tested with Boost v1.74, available from https://www.boost.org/
 */

#include <algorithm>
#include <memory>
#include <random>
#include <set>
#include <string>
#include <unordered_map>
#include <vector>

#define BOOST_TEST_MODULE datastruct
#include <boost/test/unit_test.hpp>

#include "concatenable_queue.hpp"


BOOST_AUTO_TEST_SUITE(test_concatenable_queue)

BOOST_AUTO_TEST_CASE(test_single)
{
    using Queue = mwmatching::ConcatenableQueue<int, std::string, std::string>;
    Queue q("Q");
    BOOST_TEST(q.name() == std::string("Q"));

    Queue::Node n;
    q.insert(&n, 4, "a");

    BOOST_TEST(n.find() == std::string("Q"));
    BOOST_TEST(n.prio() == 4);
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == std::string("a"));

    n.set_prio(8);

    BOOST_TEST(n.prio() == 8);
    BOOST_TEST(n.find() == std::string("Q"));
    BOOST_TEST(q.min_prio() == 8);
    BOOST_TEST(q.min_elem() == std::string("a"));
}

BOOST_AUTO_TEST_CASE(test_simple)
{
    using Queue = mwmatching::ConcatenableQueue<int, std::string, char>;
    Queue::Node n1, n2, n3, n4, n5;

    Queue q1("A");
    q1.insert(&n1, 5, 'a');

    Queue q2("B");
    q2.insert(&n2, 6, 'b');

    Queue q3("C");
    q3.insert(&n3, 7, 'c');

    Queue q4("D");
    q4.insert(&n4, 4, 'd');

    Queue q5("E");
    q5.insert(&n5, 3, 'e');

    auto m345 = {&q3, &q4, &q5};
    Queue q345("P");
    q345.merge(m345.begin(), m345.end());

    BOOST_TEST(n1.find() == std::string("A"));
    BOOST_TEST(n2.find() == std::string("B"));
    BOOST_TEST(n3.find() == std::string("P"));
    BOOST_TEST(n4.find() == std::string("P"));
    BOOST_TEST(n5.find() == std::string("P"));
    BOOST_TEST(q345.min_prio() == 3);
    BOOST_TEST(q345.min_elem() == 'e');

    n5.set_prio(6);
    BOOST_TEST(q345.min_prio() == 4);
    BOOST_TEST(q345.min_elem() == 'd');

    auto m12 = {&q1, &q2};
    Queue q12("Q");
    q12.merge(m12.begin(), m12.end());

    BOOST_TEST(n1.find() == std::string("Q"));
    BOOST_TEST(n2.find() == std::string("Q"));
    BOOST_TEST(q12.min_prio() == 5);
    BOOST_TEST(q12.min_elem() == 'a');

    auto m12345 = {&q12, &q345};
    Queue q12345("R");
    q12345.merge(m12345.begin(), m12345.end());

    BOOST_TEST(n1.find() == std::string("R"));
    BOOST_TEST(n2.find() == std::string("R"));
    BOOST_TEST(n3.find() == std::string("R"));
    BOOST_TEST(n4.find() == std::string("R"));
    BOOST_TEST(n5.find() == std::string("R"));
    BOOST_TEST(q12345.min_prio() == 4);
    BOOST_TEST(q12345.min_elem() == 'd');

    n4.set_prio(8);

    BOOST_TEST(q12345.min_prio() == 5);
    BOOST_TEST(q12345.min_elem() == 'a');

    n3.set_prio(2);

    BOOST_TEST(q12345.min_prio() == 2);
    BOOST_TEST(q12345.min_elem() == 'c');

    q12345.split();

    BOOST_TEST(n1.find() == std::string("Q"));
    BOOST_TEST(n2.find() == std::string("Q"));
    BOOST_TEST(n3.find() == std::string("P"));
    BOOST_TEST(n4.find() == std::string("P"));
    BOOST_TEST(n5.find() == std::string("P"));
    BOOST_TEST(q12.min_prio() == 5);
    BOOST_TEST(q12.min_elem() == 'a');
    BOOST_TEST(q345.min_prio() == 2);
    BOOST_TEST(q345.min_elem() == 'c');

    q12.split();
    BOOST_TEST(n1.find() == std::string("A"));
    BOOST_TEST(n2.find() == std::string("B"));

    q345.split();
    BOOST_TEST(n3.find() == std::string("C"));
    BOOST_TEST(n4.find() == std::string("D"));
    BOOST_TEST(n5.find() == std::string("E"));
    BOOST_TEST(q3.min_prio() == 2);
    BOOST_TEST(q3.min_elem() == 'c');
}

BOOST_AUTO_TEST_CASE(test_medium)
{
    using Queue = mwmatching::ConcatenableQueue<int, char, char>;

    std::vector<char> queue_names(19);
    for (int i = 0; i < 14; i++) {
        queue_names[i] = 'A' + i;
    }
    queue_names[14] = 'P';
    queue_names[15] = 'Q';
    queue_names[16] = 'R';
    queue_names[17] = 'S';
    queue_names[18] = 'Z';

    std::vector<Queue> queues(queue_names.begin(), queue_names.end());
    Queue::Node nodes[14];

    int prios[14] = {3, 8, 6, 2, 9, 4, 6, 8, 1, 5, 9, 4, 7, 8};

    auto min_prio = [&prios](int begin, int end) -> int {
        int p = begin;
        int m = prios[p];
        ++p;
        while (p != end) {
            m = std::min(m, prios[p]);
            ++p;
        }
        return m;
    };

    for (int i = 0; i < 14; i++) {
        BOOST_TEST(queues[i].name() == 'A' + i);
        queues[i].insert(&nodes[i], prios[i], 'a' + i);
    }

    auto m14 = {&queues[0], &queues[1]};
    queues[14].merge(m14.begin(), m14.end());
    BOOST_TEST(queues[14].name() == 'P');
    BOOST_TEST(queues[14].min_prio() == min_prio(0, 2));

    auto m15 = {&queues[2], &queues[3], &queues[4]};
    queues[15].merge(m15.begin(), m15.end());
    BOOST_TEST(queues[15].name() == 'Q');
    BOOST_TEST(queues[15].min_prio() == min_prio(2, 5));

    auto m16 = {&queues[5], &queues[6], &queues[7], &queues[8]};
    queues[16].merge(m16.begin(), m16.end());
    BOOST_TEST(queues[16].name() == 'R');
    BOOST_TEST(queues[16].min_prio() == min_prio(5, 9));

    auto m17 = {&queues[9], &queues[10], &queues[11], &queues[12],
                &queues[13]};
    queues[17].merge(m17.begin(), m17.end());
    BOOST_TEST(queues[17].name() == 'S');
    BOOST_TEST(queues[17].min_prio() == min_prio(9, 14));

    for (int i = 0; i < 2; i++) {
        BOOST_TEST(nodes[i].find() == 'P');
    }
    for (int i = 2; i < 5; i++) {
        BOOST_TEST(nodes[i].find() == 'Q');
    }
    for (int i = 5; i < 9; i++) {
        BOOST_TEST(nodes[i].find() == 'R');
    }
    for (int i = 9; i < 14; i++) {
        BOOST_TEST(nodes[i].find() == 'S');
    }

    auto m18 = {&queues[14], &queues[15], &queues[16], &queues[17]};
    queues[18].merge(m18.begin(), m18.end());
    BOOST_TEST(queues[18].name() == 'Z');
    BOOST_TEST(queues[18].min_prio() == 1);
    BOOST_TEST(queues[18].min_elem() == 'i');

    for (int i = 0; i < 14; i++) {
        BOOST_TEST(nodes[i].find() == 'Z');
    }

    prios[8] = 5;
    nodes[8].set_prio(prios[8]);
    BOOST_TEST(queues[18].min_prio() == 2);
    BOOST_TEST(queues[18].min_elem() == 'd');

    queues[18].split();

    for (int i = 0; i < 2; i++) {
        BOOST_TEST(nodes[i].find() == 'P');
    }
    for (int i = 2; i < 5; i++) {
        BOOST_TEST(nodes[i].find() == 'Q');
    }
    for (int i = 5; i < 9; i++) {
        BOOST_TEST(nodes[i].find() == 'R');
    }
    for (int i = 9; i < 14; i++) {
        BOOST_TEST(nodes[i].find() == 'S');
    }

    BOOST_TEST(queues[14].min_prio() == min_prio(0, 2));
    BOOST_TEST(queues[15].min_prio() == min_prio(2, 5));
    BOOST_TEST(queues[16].min_prio() == min_prio(5, 9));
    BOOST_TEST(queues[17].min_prio() == min_prio(9, 14));

    for (int i = 14; i < 18; i++) {
        queues[i].split();
    }

    for (int i = 0; i < 14; i++) {
        BOOST_TEST(nodes[i].find() == 'A' + i);
        BOOST_TEST(queues[i].min_prio() == prios[i]);
        BOOST_TEST(queues[i].min_elem() == 'a' + i);
    }
}

BOOST_AUTO_TEST_CASE(test_random)
{
    using Queue = mwmatching::ConcatenableQueue<double, int, int>;

    constexpr int NUM_NODES = 4000;

    std::mt19937 rng(23456);
    std::uniform_real_distribution<> prio_distribution;
    std::uniform_int_distribution<> node_distribution(0, NUM_NODES - 1);

    double prios[NUM_NODES];
    Queue::Node nodes[NUM_NODES];
    std::unordered_map<int, std::unique_ptr<Queue>> queues;
    std::unordered_map<int, std::set<int>> queue_nodes;
    std::unordered_map<int, std::set<int>> queue_subs;
    std::set<int> live_queues;
    std::set<int> live_merged_queues;

    // Make trivial queues.
    for (int i = 0; i < NUM_NODES; i++) {
        int name = 10000 + i;
        prios[i] = prio_distribution(rng);
        queues[name] = std::unique_ptr<Queue>(new Queue(name));
        queues[name]->insert(&nodes[i], prios[i], i);
        queue_nodes[name].insert(i);
        live_queues.insert(name);
    }

    // Run modifications.
    for (int i = 0; i < 4000; i++) {

        // Find top-level queue of few nodes.
        for (int k = 0; k < 200; k++) {
            int t = node_distribution(rng);
            int name = nodes[t].find();
            BOOST_TEST(queue_nodes[name].count(t) == 1);
        }

        // Change priority of a few nodes.
        for (int k = 0; k < 10; k++) {
            int t = node_distribution(rng);
            int name = nodes[t].find();
            BOOST_TEST(live_queues.count(name) == 1);
            BOOST_TEST(queue_nodes[name].count(t) == 1);
            BOOST_TEST(nodes[t].prio() == prios[t]);
            double p = prio_distribution(rng);
            prios[t] = p;
            nodes[t].set_prio(p);
            for (int tt : queue_nodes[name]) {
                if (prios[tt] < p) {
                    t = tt;
                    p = prios[tt];
                }
            }
            BOOST_TEST(queues[name]->min_prio() == p);
            BOOST_TEST(queues[name]->min_elem() == t);
        }

        if (live_queues.size() > 100) {

            // Choose number of queues to merge between 2 and 100.
            int k = std::uniform_int_distribution<>(2, 100)(rng);
            k = std::uniform_int_distribution<>(2, k)(rng);

            // Choose queues to merge.
            std::vector<int> live_queue_vec(live_queues.begin(),
                                            live_queues.end());
            std::vector<int> sub_names;
            std::vector<Queue*> sub_queues;
            for (int ki = 0; ki < k; ki++) {
                int t = std::uniform_int_distribution<>(
                    0, live_queue_vec.size() - 1)(rng);
                int name = live_queue_vec[t];
                sub_names.push_back(name);
                sub_queues.push_back(queues[name].get());
                live_queue_vec[t] = live_queue_vec.back();
                live_queue_vec.pop_back();
                live_queues.erase(name);
                live_merged_queues.erase(name);
            }

            // Create new queue by merging selected queues.
            int name = 20000 + i;
            queues[name] = std::unique_ptr<Queue>(new Queue(name));
            queues[name]->merge(sub_queues.begin(), sub_queues.end());
            for (int nn : sub_names) {
                queue_nodes[name].insert(queue_nodes[nn].begin(),
                                         queue_nodes[nn].end());
            }
            queue_subs[name].insert(sub_names.begin(), sub_names.end());
            live_queues.insert(name);
            live_merged_queues.insert(name);

            // Check new queue.
            {
                double p = 2;
                int t = 0;
                for (int tt : queue_nodes[name]) {
                    if (prios[tt] < p) {
                        t = tt;
                        p = prios[tt];
                    }
                }
                BOOST_TEST(queues[name]->min_prio() == p);
                BOOST_TEST(queues[name]->min_elem() == t);
            }
        }

        if ((live_queues.size() <= 100)
                || (live_merged_queues.size() >= 100)) {

            // Choose a random queue to split.
            std::vector<int> live_queue_vec(live_merged_queues.begin(),
                                            live_merged_queues.end());
            int k = std::uniform_int_distribution<>(
                0, live_queue_vec.size() - 1)(rng);
            int name = live_queue_vec[k];

            queues[name]->split();

            for (int nn : queue_subs[name]) {
                // Check reconstructed sub-queue.
                double p = 2;
                int t = 0;
                for (int tt : queue_nodes[nn]) {
                    if (prios[tt] < p) {
                        t = tt;
                        p = prios[tt];
                    }
                }
                BOOST_TEST(queues[nn]->min_prio() == p);
                BOOST_TEST(queues[nn]->min_elem() == t);

                // Mark sub-queue as live.
                live_queues.insert(nn);
                if (queue_subs.count(nn) == 1) {
                    live_merged_queues.insert(nn);
                }
            }

            live_merged_queues.erase(name);
            live_queues.erase(name);

            queues.erase(name);
            queue_nodes.erase(name);
            queue_subs.erase(name);
        }

    }
}

BOOST_AUTO_TEST_SUITE_END()
@@ -13,7 +13,7 @@
 #define BOOST_TEST_MODULE mwmatching
 #include <boost/test/unit_test.hpp>
 
-#include "mwmatching.h"
+#include "mwmatching.hpp"
 
 
 using EdgeVectorLong = std::vector<mwmatching::Edge<long>>;
@@ -327,19 +327,40 @@ BOOST_AUTO_TEST_CASE(test51_augment_blossom_nested)
     BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
 }
 
-BOOST_AUTO_TEST_CASE(test61_unsigned)
+BOOST_AUTO_TEST_CASE(test52_augment_blossom_nested2)
 {
-    std::vector<mwmatching::Edge<unsigned int>> edges = {{1, 2, 5}, {2, 3, 11}, {3, 4, 5}};
-    Matching expect = {{2, 3}};
+    /*
+     *
+     *    [4]--15   19--[2]
+     *     |   \    /    |
+     *    16    [1]     21
+     *     |   /    \    |
+     *    [5]--17   19--[3]
+     *     |
+     *    10
+     *     |
+     *    [6]
+     */
+    EdgeVectorLong edges = {
+        {1, 2, 19}, {1, 3, 19}, {1, 4, 15}, {1, 5, 17}, {2, 3, 21}, {4, 5, 16}, {5, 6, 10}};
+    Matching expect = {{1, 4}, {2, 3}, {5, 6}};
     BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
 }
 
-BOOST_AUTO_TEST_CASE(test62_unsigned2)
+BOOST_AUTO_TEST_CASE(test61_triangles_n9)
 {
-    std::vector<mwmatching::Edge<unsigned int>> edges = {
-        {1, 2, 8}, {1, 3, 9}, {2, 3, 10}, {3, 4, 7}, {1, 6, 5}, {4, 5, 6}};
-    Matching expect = {{2, 3}, {1, 6}, {4, 5}};
-    BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
+    /*
+     * t.f 9 nodes
+     *
+     * [1]------[4]      [7]
+     *  | \      | \      | \
+     *  |  [3]   |  [6]   |  [9]
+     *  | /      | /      | /
+     * [2]      [5]------[8]
+     */
+    EdgeVectorLong edges = {
+        {1, 2, 1}, {1, 3, 1}, {2, 3, 1}, {4, 5, 1}, {4, 6, 1}, {5, 6, 1}, {7, 8, 1}, {7, 9, 1}, {8, 9, 1}, {1, 4, 1}, {5, 8, 1}};
+    BOOST_TEST(mwmatching::maximum_weight_matching(edges).size() == 4);
 }
 
 BOOST_AUTO_TEST_CASE(test_fail_bad_input)
@@ -749,3 +770,72 @@ BOOST_AUTO_TEST_CASE(test_blossom_not_full)
}

BOOST_AUTO_TEST_SUITE_END()


/* ********** Test graphs that force big values for dual / slack ********** */

BOOST_AUTO_TEST_SUITE(test_value_range)

BOOST_AUTO_TEST_CASE(test_big_blossom_dual)
{
    /*
     * Force modified blossom dual close to 2*maxweight.
     *
     *        [3]
     *        / \
     *    W-4/   \W
     *   7  /     \
     * [1]-----[2]-----[4]
     *      |     W-4
     *     5|
     *      |  1
     *     [5]-----[6]
     */
    long w = std::numeric_limits<long>::max() / 6;
    EdgeVectorLong edges = {{1, 2, 7}, {2, 3, w - 4}, {2, 4, w - 4}, {2, 5, 5}, {3, 4, w}, {5, 6, 1}};
    Matching expect = {{1, 2}, {3, 4}, {5, 6}};
    BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
}

BOOST_AUTO_TEST_CASE(test_negative_blossom_dual)
{
    /*
     * Force modified blossom dual close to -maxweight.
     *
     *        [3]
     *        / \
     *      5/   \7
     *   1  /     \
     * [1]-----[2]-----[4]
     *      |     5
     *     1|
     *      |  W
     *     [5]-----[6]
     */
    long w = std::numeric_limits<long>::max() / 6;
    EdgeVectorLong edges = {{1, 2, 1}, {2, 3, 5}, {2, 4, 5}, {2, 5, 1}, {3, 4, 7}, {5, 6, w}};
    Matching expect = {{1, 2}, {3, 4}, {5, 6}};
    BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
}

BOOST_AUTO_TEST_CASE(test_big_edge_slack)
{
    /*
     * Force modified edge slack close to 3*maxweight.
     *
     *     6     W    W-2     5
     * [1]-----[2]-----[3]-----[4]-----[5]
     *  |               |
     *  |1              |1
     *  |               |
     * [6]-----[7]-----[8]-----[9]-----[10]
     *     6     W    W-2     5
     */
    long w = std::numeric_limits<long>::max() / 6;
    EdgeVectorLong edges = {{1, 2, 6}, {1, 6, 1}, {2, 3, w}, {3, 4, w - 2}, {3, 8, 1}, {4, 5, 5},
                            {6, 7, 6}, {7, 8, w}, {8, 9, w - 2}, {9, 10, 5}};
    Matching expect = {{1, 6}, {2, 3}, {4, 5}, {7, 8}, {9, 10}};
    BOOST_TEST(mwmatching::maximum_weight_matching(edges) == expect);
}

BOOST_AUTO_TEST_SUITE_END()
@@ -0,0 +1,252 @@
/*
 * Unit tests for PriorityQueue data structure.
 *
 * Depends on the Boost.Test unit test framework.
 * Tested with Boost v1.74, available from https://www.boost.org/
 */

#include <algorithm>
#include <climits>
#include <memory>
#include <random>
#include <string>
#include <tuple>
#include <vector>

#define BOOST_TEST_MODULE datastruct
#include <boost/test/unit_test.hpp>

#include "priority_queue.hpp"


BOOST_AUTO_TEST_SUITE(test_priority_queue)

BOOST_AUTO_TEST_CASE(test_empty)
{
    using Queue = mwmatching::PriorityQueue<int, std::string>;
    Queue q;
    BOOST_TEST(q.empty() == true);
}

BOOST_AUTO_TEST_CASE(test_single)
{
    using Queue = mwmatching::PriorityQueue<int, std::string>;
    Queue q;

    Queue::Node n1;
    BOOST_TEST(n1.valid() == false);

    q.insert(&n1, 5, "a");

    BOOST_TEST(n1.valid() == true);
    BOOST_TEST(n1.prio() == 5);
    BOOST_TEST(q.empty() == false);
    BOOST_TEST(q.min_prio() == 5);
    BOOST_TEST(q.min_elem() == std::string("a"));

    q.set_prio(&n1, 3);
    BOOST_TEST(n1.prio() == 3);
    BOOST_TEST(q.min_prio() == 3);
    BOOST_TEST(q.min_elem() == std::string("a"));

    q.remove(&n1);
    BOOST_TEST(n1.valid() == false);
    BOOST_TEST(q.empty() == true);
}

BOOST_AUTO_TEST_CASE(test_simple)
{
    using Queue = mwmatching::PriorityQueue<int, char>;
    Queue q;
    Queue::Node nodes[10];

    q.insert(&nodes[0], 9, 'a');
    BOOST_TEST(q.min_prio() == 9);
    BOOST_TEST(q.min_elem() == 'a');

    q.insert(&nodes[1], 4, 'b');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.insert(&nodes[2], 7, 'c');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.insert(&nodes[3], 5, 'd');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.insert(&nodes[4], 8, 'e');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.insert(&nodes[5], 6, 'f');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.insert(&nodes[6], 4, 'g');
    q.insert(&nodes[7], 5, 'h');
    q.insert(&nodes[8], 2, 'i');
    BOOST_TEST(q.min_prio() == 2);
    BOOST_TEST(q.min_elem() == 'i');

    q.insert(&nodes[9], 6, 'j');
    BOOST_TEST(q.min_prio() == 2);
    BOOST_TEST(q.min_elem() == 'i');

    q.set_prio(&nodes[2], 1);
    BOOST_TEST(q.min_prio() == 1);
    BOOST_TEST(q.min_elem() == 'c');

    q.set_prio(&nodes[4], 3);
    BOOST_TEST(q.min_prio() == 1);
    BOOST_TEST(q.min_elem() == 'c');

    q.remove(&nodes[2]);
    BOOST_TEST(q.min_prio() == 2);
    BOOST_TEST(q.min_elem() == 'i');

    q.remove(&nodes[8]);
    BOOST_TEST(q.min_prio() == 3);
    BOOST_TEST(q.min_elem() == 'e');

    q.remove(&nodes[4]);
    q.remove(&nodes[1]);
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'g');

    q.remove(&nodes[3]);
    q.remove(&nodes[9]);
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'g');

    q.remove(&nodes[6]);
    BOOST_TEST(q.min_prio() == 5);
    BOOST_TEST(q.min_elem() == 'h');

    BOOST_TEST(q.empty() == false);
    q.clear();
    BOOST_TEST(q.empty() == true);
}

BOOST_AUTO_TEST_CASE(test_increase_prio)
{
    using Queue = mwmatching::PriorityQueue<int, char>;

    Queue q;
    Queue::Node n1;

    q.insert(&n1, 5, 'a');
    q.set_prio(&n1, 8);
    BOOST_TEST(n1.prio() == 8);
    BOOST_TEST(q.min_prio() == 8);

    q.clear();

    Queue::Node n2, n3, n4;

    q.insert(&n1, 9, 'A');
    q.insert(&n2, 4, 'b');
    q.insert(&n3, 7, 'c');
    q.insert(&n4, 5, 'd');
    BOOST_TEST(q.min_prio() == 4);
    BOOST_TEST(q.min_elem() == 'b');

    q.set_prio(&n2, 8);
    BOOST_TEST(n2.prio() == 8);
    BOOST_TEST(q.min_elem() == 'd');
    BOOST_TEST(q.min_prio() == 5);

    q.set_prio(&n3, 10);
    BOOST_TEST(n3.prio() == 10);
    BOOST_TEST(q.min_elem() == 'd');

    q.remove(&n4);
    BOOST_TEST(q.min_elem() == 'b');

    q.remove(&n2);
    BOOST_TEST(q.min_prio() == 9);
    BOOST_TEST(q.min_elem() == 'A');

    q.remove(&n1);
    BOOST_TEST(q.min_elem() == 'c');
    BOOST_TEST(q.min_prio() == 10);

    q.remove(&n3);
    BOOST_TEST(q.empty() == true);
}

BOOST_AUTO_TEST_CASE(test_random)
{
    using Queue = mwmatching::PriorityQueue<int, int>;
    Queue q;

    const int num_elem = 1000;
    std::vector<std::tuple<std::unique_ptr<Queue::Node>, int, int>> elems;
    int next_data = 0;

    std::mt19937 rng(34567);

    auto check = [&q,&elems]() {
        int min_prio = q.min_prio();
        int min_data = q.min_elem();
        int best_prio = INT_MAX;
        bool found = false;
        for (const auto& v : elems) {
            int this_prio = std::get<1>(v);
            int this_data = std::get<2>(v);
            best_prio = std::min(best_prio, this_prio);
            if ((this_prio == min_prio) && (this_data == min_data)) {
                found = true;
            }
        }
        BOOST_TEST(found == true);
        BOOST_TEST(min_prio == best_prio);
    };

    for (int i = 0; i < num_elem; ++i) {
        ++next_data;
        int prio = std::uniform_int_distribution<>(0, 1000000)(rng);
        std::unique_ptr<Queue::Node> nptr(new Queue::Node);
        q.insert(nptr.get(), prio, next_data);
        elems.push_back(std::make_tuple(std::move(nptr), prio, next_data));
|
||||
check();
|
||||
}
|
||||
|
||||
for (int i = 0; i < 10000; ++i) {
|
||||
int p = std::uniform_int_distribution<>(0, num_elem - 1)(rng);
|
||||
Queue::Node* node = std::get<0>(elems[p]).get();
|
||||
int prio = std::get<1>(elems[p]);
|
||||
prio = std::uniform_int_distribution<>(0, 1000000)(rng);
|
||||
q.set_prio(node, prio);
|
||||
std::get<1>(elems[p]) = prio;
|
||||
check();
|
||||
|
||||
p = std::uniform_int_distribution<>(0, num_elem - 1)(rng);
|
||||
node = std::get<0>(elems[p]).get();
|
||||
q.remove(node);
|
||||
elems.erase(elems.begin() + p);
|
||||
check();
|
||||
|
||||
++next_data;
|
||||
prio = std::uniform_int_distribution<>(0, 1000000)(rng);
|
||||
std::unique_ptr<Queue::Node> nptr(new Queue::Node);
|
||||
q.insert(nptr.get(), prio, next_data);
|
||||
elems.push_back(std::make_tuple(std::move(nptr), prio, next_data));
|
||||
check();
|
||||
}
|
||||
|
||||
for (int i = 0; i < num_elem; ++i) {
|
||||
int p = std::uniform_int_distribution<>(0, num_elem - 1 - i)(rng);
|
||||
Queue::Node* node = std::get<0>(elems[p]).get();
|
||||
q.remove(node);
|
||||
elems.erase(elems.begin() + p);
|
||||
if (! elems.empty()) {
|
||||
check();
|
||||
}
|
||||
}
|
||||
|
||||
BOOST_TEST(q.empty() == true);
|
||||
}
|
||||
|
||||
BOOST_AUTO_TEST_SUITE_END()
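The tests above exercise the interface of `mwmatching::PriorityQueue` (insert, set_prio, remove, min_prio, min_elem, clear, empty). As a rough illustration of the semantics being tested, here is a deliberately naive stand-in with the same operations. This sketch is not the library implementation: the real class uses intrusive nodes with O(log n) operations, while this one does a linear scan for the minimum, and its tie-breaking between equal priorities is unspecified.

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <utility>

// Naive stand-in for the PriorityQueue interface exercised by the tests above.
template <typename Prio, typename Elem>
class ToyPriorityQueue {
public:
    struct Node {
        Prio prio_value{};
        Prio prio() const { return prio_value; }
    };

    void insert(Node* node, Prio prio, Elem elem) {
        node->prio_value = prio;
        items_[node] = {prio, elem};
    }

    void set_prio(Node* node, Prio prio) {
        node->prio_value = prio;
        items_.at(node).first = prio;
    }

    void remove(Node* node) { items_.erase(node); }
    void clear() { items_.clear(); }
    bool empty() const { return items_.empty(); }

    Prio min_prio() const { return min_item().first; }
    Elem min_elem() const { return min_item().second; }

private:
    std::map<Node*, std::pair<Prio, Elem>> items_;

    // Linear scan; the real data structure finds the minimum in O(1)/O(log n).
    const std::pair<Prio, Elem>& min_item() const {
        if (items_.empty()) throw std::logic_error("empty queue");
        const std::pair<Prio, Elem>* best = nullptr;
        for (const auto& kv : items_) {
            if (!best || kv.second.first < best->first) best = &kv.second;
        }
        return *best;
    }
};
```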
747
doc/Algorithm.md
@@ -4,19 +4,20 @@

## Introduction

This document describes the implementation of an algorithm that computes a maximum weight matching
in a general graph in time _O(n (n + m) log n)_, where _n_ is the number of vertices in
the graph and _m_ is the number of edges.

In graph theory, a _matching_ is a subset of edges such that none of them share a common vertex.

A _maximum cardinality matching_ is a matching that contains the largest possible number of edges
(or equivalently, the largest possible number of vertices).

If a graph has weights assigned to its edges, a _maximum weight matching_
is a matching that achieves the largest possible sum of weights of matched edges.
An algorithm for maximum weight matching can obviously also be used to calculate a maximum
cardinality matching by simply assigning weight 1 to all edges.

Certain related problems can be understood as _restrictions_ of maximum weighted matching
in general graphs.
Examples are: maximum matching in bipartite graphs, maximum cardinality matching in general graphs,
and maximum weighted matching in general graphs with edge weights limited to integers
@@ -48,7 +49,7 @@ It is based on the ideas of Edmonds, but uses different data structures to reduc
of work.

In 1983, Galil, Micali and Gabow published a maximum weighted matching algorithm that runs in
time _O(n m log n)_ [[4]](#galil_micali_gabow1986).
It is an implementation of Edmonds' blossom algorithm that uses advanced data structures
to speed up critical parts of the algorithm.
This algorithm is asymptotically faster than _O(n<sup>3</sup>)_ for sparse graphs,
@@ -63,45 +64,32 @@ of the literature.
The paper describes a maximum weighted matching algorithm that is similar to Edmonds'
blossom algorithm, but carefully implemented to run in time _O(n<sup>3</sup>)_.
It then sketches how advanced data structures can be added to arrive at the Galil-Micali-Gabow
algorithm that runs in time _O(n m log n)_.

In 1990, Gabow published a maximum weighted matching algorithm that runs in time
_O(n m + n<sup>2</sup> log n)_ [[6]](#gabow1990).
It uses several advanced data structures, including Fibonacci heaps.
Unfortunately I don't understand this algorithm at all.


## Choosing an algorithm

I selected the _O(n m log n)_ algorithm by Galil, Micali and Gabow
[[4]](#galil_micali_gabow1986).

This algorithm is asymptotically optimal for sparse graphs.
It has also been shown to be quite fast in practice on several types of graphs
including random graphs [[7]](#mehlhorn_schafer2002).

This algorithm is more difficult to implement than the older _O(n<sup>3</sup>)_ algorithm.
In particular, it requires a specialized data structure to implement concatenable priority queues.
This increases the size and complexity of the code quite a bit.
However, in my opinion the performance improvement is worth the extra effort.

## Description of the algorithm

My implementation roughly follows the description by Zvi Galil in [[5]](#galil1986).
I recommend reading that paper before diving into my description below.
The paper explains the algorithm in depth and shows how it relates to matching
in bipartite graphs and non-weighted graphs.
||||
|
@ -253,14 +241,14 @@ have _slack_.
|
|||
An augmenting path that consists only of tight edges is _guaranteed_ to increase the weight
|
||||
of the matching as much as possible.
|
||||
|
||||
While searching for an augmenting path, we simply restrict the search to tight edges,
|
||||
While searching for an augmenting path, we restrict the search to tight edges,
|
||||
ignoring all edges that have slack.
|
||||
Certain explicit actions of the algorithm cause edges to become tight or slack.
|
||||
How this works will be explained later.
|
||||
|
||||
To find an augmenting path, the algorithm searches for alternating paths that start
|
||||
in an unmatched vertex.
|
||||
The collection of alternating paths forms a forest of trees.
|
||||
The collection of such alternating paths forms a forest of trees.
|
||||
Each tree is rooted in an unmatched vertex, and all paths from the root to the leaves of a tree
|
||||
are alternating paths.
|
||||
The nodes in these trees are top-level blossoms.
@@ -289,8 +277,8 @@ an odd-length alternating cycle.
The lowest common ancestor node in the alternating tree forms the beginning and end
of the alternating cycle.
In this case a new blossom must be created by shrinking the cycle.
On the other hand, if the two S-blossoms are in different alternating trees,
the edge that links the blossoms is part of an augmenting path between the roots of the two trees.

![Figure 3](figures/graph3.png) <br>
*Figure 3: Growing alternating trees*
@@ -314,7 +302,7 @@ The search procedure considers these vertices one-by-one and tries to use them t
either grow the alternating tree (thus adding new vertices to the queue),
or discover an augmenting path or a new blossom.

The search for an augmenting path proceeds as follows:

- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
@@ -469,9 +457,9 @@ $$ \pi_{x,y} = u_x + u_y + \sum_{(x,y) \in B} z_B - w_{x,y} $$

An edge is _tight_ if and only if its slack is zero.
Given the values of the dual variables, it is very easy to calculate the slack of an edge
which is not contained in any blossom: add the duals of its incident vertices and
subtract the weight.
To check whether an edge is tight, simply compute its slack and compare it to zero.

Calculating the slack of an edge that is contained in one or more blossoms is a little tricky,
but fortunately we don't need such calculations.
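For an edge outside any blossom, the slack computation reduces to a one-line formula. A minimal sketch (integer duals and weights assumed; the function names are illustrative, not the library's API):

```cpp
#include <cassert>

// Slack of an edge (x, y, w) that is not contained in any blossom:
// pi = u_x + u_y - w. The edge is tight when its slack is zero.
int edge_slack(int u_x, int u_y, int w) {
    return u_x + u_y - w;
}

bool is_tight(int u_x, int u_y, int w) {
    return edge_slack(u_x, u_y, w) == 0;
}
```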
@@ -504,7 +492,7 @@ At that point the maximum weight matching has been found.
When the matching algorithm is finished, the constraints can be checked to verify
that the matching is optimal.
This check is simpler and faster than the matching algorithm itself.
It can therefore be a useful way to guard against bugs in the algorithm.

### Rules for updating dual variables
@@ -534,11 +522,12 @@ It then changes dual variables as follows:
- _z<sub>B</sub> ← z<sub>B</sub> − 2 * δ_ for every non-trivial T-blossom _B_

Dual variables of unlabeled blossoms and their vertices remain unchanged.
Dual variables _z<sub>B</sub>_ of non-trivial sub-blossoms also remain unchanged;
only top-level blossoms have their _z<sub>B</sub>_ updated.

Note that these rules ensure that no change occurs to the slack of any edge which is matched,
or part of an alternating tree, or contained in a blossom.
Such edges are tight and remain tight through the update.
However, the update reduces the slack of edges between S-blossoms and edges between S-blossoms
and unlabeled blossoms.
It may cause some of these edges to become tight, allowing them to be used
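The invariance of slack on matched and tree edges follows directly from the update rules: such an edge joins an S-vertex (dual decreases by δ) to a T-vertex (dual increases by δ). A small numeric sketch of this cancellation, under the assumption of integer duals:

```cpp
#include <cassert>

// One dual update: S-vertex duals decrease by delta, T-vertex duals increase
// by delta. For an edge between an S-vertex and a T-vertex, the slack
// u_s + u_t - w is therefore unchanged, so tight edges stay tight.
struct Duals {
    int u_s;  // dual of the S-vertex endpoint
    int u_t;  // dual of the T-vertex endpoint
};

Duals apply_delta(Duals d, int delta) {
    d.u_s -= delta;  // S-vertex rule
    d.u_t += delta;  // T-vertex rule
    return d;
}
```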
@@ -569,25 +558,62 @@ If the dual update is limited by _δ<sub>4</sub>_, it causes the dual varia
a T-blossom to become zero.
The next step is to expand that blossom.

### Discovering tight edges through delta steps

A delta step may find that _δ = 0_, implying that the dual variables don't change.
This can still be useful since all types of updates have side effects (adding an edge
to an alternating tree, or expanding a blossom) that allow the algorithm to make progress.
In fact, it is convenient to let the dual update mechanism drive the entire process of discovering
tight edges and growing alternating trees.

In my description of the search algorithm above, I stated that upon discovery of a tight edge
between a newly labeled S-vertex and an unlabeled vertex or a different S-blossom, the edge should
be used to grow the alternating tree or to create a new blossom or to form an augmenting path.
However, it turns out to be easier to postpone the use of such edges until the next delta step.
While scanning newly labeled S-vertices, edges to unlabeled vertices or different S-blossoms
are discovered but not yet used.
Such edges are merely registered in a suitable data structure.
Even if the edge is tight, it is registered rather than used right away.

Once the scan completes, a delta step will be done.
If any tight edges were discovered during the scan, the delta step will find that either
_δ<sub>2</sub> = 0_ or _δ<sub>3</sub> = 0_.
The corresponding step (growing the alternating tree, creating a blossom or augmenting
the matching) will occur at that point.
If no suitable tight edges exist, a real (non-zero) change of dual variables will occur.

The search for an augmenting path becomes as follows:

- Mark all top-level blossoms as _unlabeled_.
- Initialize an empty queue _Q_.
- Assign label S to all top-level blossoms that contain an unmatched vertex.
  Add all vertices inside such blossoms to _Q_.
- Repeat until either an augmenting path is found or _δ<sub>1</sub> = 0_:
  - Scan all vertices in _Q_ as described earlier.
    Register edges to unlabeled vertices or other S-blossoms.
    Do not yet use such edges to change the alternating tree, even if the edge is tight.
  - Calculate _δ_ and update dual variables as described above.
  - If _δ = δ<sub>1</sub>_, end the search.
    The maximum weight matching has been found.
  - If _δ = δ<sub>2</sub>_, use the corresponding edge to grow the alternating tree.
    Assign label T to the unlabeled blossom.
    Then assign label S to its mate and add the new S-vertices to _Q_.
  - If _δ = δ<sub>3</sub>_ and the corresponding edge connects two S-blossoms
    in the same alternating tree, use the edge to create a new blossom.
    Add the new S-vertices to _Q_.
  - If _δ = δ<sub>3</sub>_ and the corresponding edge connects two S-blossoms
    in different alternating trees, use the edge to construct an augmenting path.
    End the search and return the augmenting path.
  - If _δ = δ<sub>4</sub>_, expand the corresponding T-blossom.

It may seem complicated, but this is actually easier.
The code that scans newly labeled S-vertices no longer needs special treatment of tight edges.

In general, multiple updates of the dual variables are necessary during a single _stage_ of
the algorithm.
Remember that a stage is the process of growing alternating trees until an augmenting path is found.
A stage ends either by augmenting the matching, or by concluding that no further improvement
is possible.
Each stage consists of one or more substages.
A substage scans tight edges to grow the alternating trees.
When a substage runs out of tight edges, it ends by performing a dual variable update.
A substage also ends immediately when it finds an augmenting path.
At the end of each stage, labels and alternating trees are removed.

The matching algorithm ends when a substage ends in a dual variable update limited
by _δ<sub>1</sub>_.
At that point the matching has maximum weight.
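The decision at the heart of each substage — take the smallest of the four δ candidates and dispatch on which one achieved the minimum — can be sketched as follows. Names and the `DeltaChoice` struct are illustrative, not the library's actual code:

```cpp
#include <array>
#include <cstddef>

struct DeltaChoice {
    int delta;         // amount by which the dual variables change
    std::size_t kind;  // 1..4: which of delta1..delta4 achieved the minimum
};

// candidates[0..3] hold delta1..delta4; ties resolve to the lowest index,
// so termination (delta1) wins over other actions of equal value.
DeltaChoice choose_delta(const std::array<int, 4>& candidates) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < 4; ++i) {
        if (candidates[i] < candidates[best]) best = i;
    }
    return {candidates[best], best + 1};
}
```

The caller then ends the search (kind 1), grows the tree (kind 2), makes a blossom or augments (kind 3), or expands a T-blossom (kind 4).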
### Expanding a blossom
@@ -635,161 +661,322 @@ All vertices of sub-blossoms that got label S are inserted into _Q_.
![Figure 5](figures/graph5.png) <br>
*Figure 5: Expanding a T-blossom*

### Keeping track of the top-level blossom of each vertex

The algorithm often needs to find the top-level blossom _B(x)_ that contains a given vertex _x_.

A naive implementation may keep this information in an array where the element with
index _x_ holds a pointer to blossom _B(x)_.
Lookup in this array would be fast, but keeping the array up-to-date takes too much time.
There can be _O(n)_ stages, and _O(n)_ blossoms can be created or expanded during a stage,
and a blossom can contain _O(n)_ vertices,
therefore the total number of updates to the array could add up to _O(n<sup>3</sup>)_.

To solve this, we use a special data structure: a _concatenable priority queue_.
Each top-level blossom maintains an instance of this type of queue, containing its vertices.
Each vertex is a member in precisely one such queue.

To find the top-level blossom _B(x)_ that contains a given vertex _x_, we determine
the queue instance in which the vertex is a member.
This takes time _O(log n)_.
The queue instance corresponds directly to a specific blossom, which we can find
for example by storing a pointer to the blossom inside the queue instance.

When a new blossom is created, the concatenable queues of its sub-blossoms are merged
to form one concatenable queue for the new blossom.
The merged queue contains all vertices of the original queues.
Merging a pair of queues takes time _O(log n)_.
To merge the queues of _k_ sub-blossoms, the concatenation step is repeated _k-1_ times,
taking total time _O(k log n)_.

When a blossom is expanded, its concatenable queue is un-concatenated to recover separate queues
for the sub-blossoms.
This also takes time _O(log n)_ for each sub-blossom.

Implementation details of concatenable queues are discussed later in this document.
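The interface can be pictured with a deliberately naive stand-in. The sketch below keeps an explicit vertex-to-blossom map, so merging costs time proportional to the size of the merged queue — exactly the cost the real concatenable queue avoids by achieving O(log n) per merge and find. All names are illustrative:

```cpp
#include <unordered_map>
#include <vector>

struct Blossom {
    std::vector<int> vertices;  // vertices currently inside this top-level blossom
};

struct TopLevelIndex {
    std::unordered_map<int, Blossom*> owner;  // vertex -> top-level blossom

    // O(1) here; O(log n) in the real concatenable queue, but without the
    // expensive bookkeeping below.
    Blossom* find(int x) { return owner.at(x); }

    // Absorb sub-blossom 'sub' into 'parent' (queue concatenation).
    // Linear in |sub| here; the real structure does this in O(log n).
    void merge(Blossom* parent, Blossom* sub) {
        for (int v : sub->vertices) {
            owner[v] = parent;
            parent->vertices.push_back(v);
        }
        sub->vertices.clear();
    }
};
```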
### Lazy updating of dual variables

During a delta step, the dual variables of labeled vertices and blossoms change as described above.
Updating these variables directly would take time _O(n)_ per delta step.
The total number of delta steps during a matching may be _Θ(n<sup>2</sup>)_,
pushing the total time to update dual variables to _O(n<sup>3</sup>)_ which is too slow.

To solve this, [[4]](#galil_micali_gabow1986) describes a technique that stores dual values
in a _modified_ form which is invariant under delta steps.
The modified values can be converted back to the true dual values when necessary.
[[7]](#mehlhorn_schafer2002) describes a slightly different technique which I find easier
to understand.
My implementation is very similar to theirs.

The first trick is to keep track of the running sum of _δ_ values since the beginning of the algorithm.
Let's call that number _Δ_.
At the start of the algorithm _Δ = 0_, but the value increases as the algorithm goes through delta steps.

For each non-trivial blossom, rather than storing its true dual value, we store a _modified_ dual value:

- For an S-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> - 2 Δ_
- For a T-blossom, the modified dual value is _z'<sub>B</sub> = z<sub>B</sub> + 2 Δ_
- For an unlabeled blossom or non-top-level blossom, the modified dual value is equal
  to the true dual value.

These modified values are invariant under delta steps.
Thus, there is no need to update the stored values during a delta step.

Since the modified blossom dual value depends on the label (S or T) of the blossom,
the modified value must be updated whenever the label of the blossom changes.
This update can be done in constant time, and changing the label of a blossom is
in any case an explicit step, so this won't increase the asymptotic run time.

For each vertex, rather than storing its true dual value, we store a _modified_ dual value:

- For an S-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> + Δ_
- For a T-vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub> - Δ_
- For an unlabeled vertex, the modified dual value is _u'<sub>x</sub> = u<sub>x</sub> - offset<sub>B(x)</sub>_

where _offset<sub>B</sub>_ is an extra variable which is maintained for each top-level blossom.

Again, the modified values are invariant under delta steps, which implies that no update
to the stored values is necessary during a delta step.

Since the modified vertex dual value depends on the label (S or T) of its top-level blossom,
an update is necessary when that label changes.
For S-vertices, we can afford to apply that update directly to the vertices involved.
This is possible since a vertex becomes an S-vertex at most once per stage.

The situation is more complicated for T-vertices.
During a stage, a T-vertex can become unlabeled if it is part of a T-blossom that gets expanded.
The same vertex can again become a T-vertex, then again become unlabeled during a subsequent
blossom expansion.
In this way, a vertex can transition between T-vertex and unlabeled vertex up to _O(n)_ times
within a stage.
We can not afford to update the stored modified vertex dual so many times.
This is where the _offset_ variables come in.
If a blossom becomes a T-blossom, rather than updating the modified duals of all vertices,
we update only the _offset_ variable of the blossom such that the modified vertex duals
remain unchanged.
If a blossom is expanded, we push the _offset_ values down to its sub-blossoms.
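The S-vertex case of this scheme fits in a few lines. The sketch below (illustrative names, integer duals) stores _u' = u + Δ_ when the vertex becomes an S-vertex; a delta step only advances the running total Δ, and the true dual is recovered on demand:

```cpp
#include <cassert>

// Delta-invariant "modified" dual of an S-vertex: stored = u + Delta at the
// moment the value was stored. During a delta step only the global running
// sum Delta changes; the stored value stays put.
struct SVertexDual {
    int stored;  // modified dual u' = u + Delta

    static SVertexDual make(int true_dual, int delta_total) {
        return {true_dual + delta_total};
    }

    // Recover the true dual: u = u' - Delta.
    int true_dual(int delta_total) const { return stored - delta_total; }
};
```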
### Efficiently computing _δ_

To perform a delta step, the algorithm computes the values
of _δ<sub>1</sub>_, _δ<sub>2</sub>_, _δ<sub>3</sub>_ and _δ<sub>4</sub>_
and determines which edge (_δ<sub>2</sub>_, _δ<sub>3</sub>_) or
blossom (_δ<sub>4</sub>_) achieves the minimum value.

A naive implementation might compute _δ_ by looping over the vertices, blossoms and edges
in the graph.
The total number of delta steps during a matching may be _Θ(n<sup>2</sup>)_,
pushing the total time for _δ_ calculations to _O(n<sup>2</sup> m)_ which is much too slow.
[[4]](#galil_micali_gabow1986) introduces a combination of data structures from which
the value of _δ_ can be computed efficiently.

_δ<sub>1</sub>_ is the minimum dual value of any S-vertex.
This value can be computed in constant time.
The dual value of an unmatched vertex is reduced by _δ_ during every delta step.
Since all vertex duals start with the same dual value _u<sub>start</sub>_,
all unmatched vertices have dual value _δ<sub>1</sub> = u<sub>start</sub> - Δ_.
Unmatched vertices have been S-vertices during every delta step, so no other S-vertex
can have a smaller dual value.
_δ<sub>3</sub>_ is half of the minimum slack of any edge between two different S-blossoms.
To compute this efficiently, we keep edges between S-blossoms in a priority queue.
The edges are inserted into the queue during scanning of newly labeled S-vertices.
To compute _δ<sub>3</sub>_, we simply find the minimum-priority element of the queue.

A complication may occur when a new blossom is created.
Edges that connect different top-level S-blossoms before creation of the new blossom
may end up as internal edges inside the newly created blossom.
This implies that such edges would have to be removed from the _δ<sub>3</sub>_ priority queue,
but that would be quite difficult.
Instead, we just let those edges stay in the queue.
When computing the value of _δ<sub>3</sub>_, we thus have to check whether the minimum
element represents an edge between different top-level blossoms.
If not, we discard such stale elements until we find an edge that does.
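This is the classic lazy-deletion pattern, sketched here with a plain `std::priority_queue`. The `top_blossom` array stands in for the _B(x)_ lookup; names are illustrative:

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct Edge { int x, y; int prio; };

// Order the heap as a min-heap on priority.
struct EdgeCmp {
    bool operator()(const Edge& a, const Edge& b) const { return a.prio > b.prio; }
};

// Pop stale entries (both endpoints now in the same top-level blossom) until
// the minimum connects two different blossoms. Assumes at least one
// non-stale edge is present.
Edge min_delta3_edge(std::priority_queue<Edge, std::vector<Edge>, EdgeCmp>& q,
                     const std::vector<int>& top_blossom) {
    while (top_blossom[q.top().x] == top_blossom[q.top().y]) {
        q.pop();  // edge became internal to a blossom; discard lazily
    }
    return q.top();
}
```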
A complication occurs when dual variables are updated.
At that point, the slack of any edge between different S-blossoms decreases by _2\*δ_.
But we can not afford to update the priorities of all elements in the queue.
To solve this, we set the priority of each edge to its _modified slack_.

The _modified slack_ of an edge is defined as follows:
$$ \pi'_{x,y} = u'_x + u'_y - w_{x,y} $$

The modified slack is computed in the same way as true slack, except it uses
the modified vertex duals instead of true vertex duals.
Blossom duals are ignored since we will never compute the modified slack of an edge that
is contained inside a blossom.
Because modified vertex duals are invariant under delta steps, so is the modified edge slack.
As a result, the priorities of edges in the priority queue remain unchanged during a delta step.
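For an edge between two S-vertices, both endpoints carry _u' = u + Δ_, so the modified slack differs from the true slack only by the constant _2Δ_ — which is why all priorities in the queue shift uniformly and their order never changes. A small sketch (illustrative names, integer arithmetic):

```cpp
#include <cassert>

// Modified slack pi' = u'_x + u'_y - w, computed from modified vertex duals.
int modified_slack(int mod_u_x, int mod_u_y, int w) {
    return mod_u_x + mod_u_y - w;
}
```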
_δ<sub>4</sub>_ is half of the minimum dual variable of any T-blossom.
|
||||
To compute this efficiently, we keep non-trivial T-blossoms in a priority queue.
|
||||
The blossoms are inserted into the queue when they become a T-blossom and removed from
|
||||
the queue when they stop being a T-blossom.
|
||||
|
||||
For every S-blossom _B_, maintain a list _L<sub>B</sub>_ of edges between _B_ and
|
||||
other S-blossoms.
|
||||
The purpose of _L<sub>B</sub>_ is to contain, for every other S-blossom, the least-slack edge
|
||||
between _B_ and that blossom.
|
||||
These lists are kept up-to-date as part of the procedure that considers S-vertices.
|
||||
While considering vertex _x_, if edge _(x, y, w)_ has positive slack,
|
||||
and _B(y)_ is an S-blossom, the edge is inserted in _L<sub>B(x)</sub>_.
|
||||
This may cause _L<sub>B(x)</sub>_ to contain multiple edges to _B(y)_.
|
||||
That's okay as long as it definitely contains the least-slack edge to _B(y)_.
|
||||
A complication occurs when dual variables are updated.
|
||||
At that point, the dual variable of any T-blossom decreases by _2\*δ_.
|
||||
But we can not afford to update the priorities of all elements in the queue.
|
||||
To solve this, we set the priority of each blossom to its _modified_ dual value
|
||||
_z'<sub>B</sub> = z<sub>B</sub> + 2\*Δ_.
|
||||
These values are invariant under delta steps.
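A minimal sketch of this bookkeeping follows.
The names are hypothetical, and a real implementation additionally removes blossoms
from the queue when they stop being T-blossoms:

```python
import heapq

# Priority queue of T-blossoms, keyed by the modified dual
# z'_B = z_B + 2*Delta, which never changes during delta steps.
t_blossom_queue = []

def add_t_blossom(blossom, z, delta_sum):
    """Insert a blossom when it becomes a T-blossom; delta_sum is the
    current value of Delta (the running sum of all delta steps)."""
    heapq.heappush(t_blossom_queue, (z + 2 * delta_sum, blossom))

def delta4(delta_sum):
    """delta4 = half the minimum true T-blossom dual, recovered from
    the invariant modified priority by subtracting 2*Delta."""
    z_modified, _blossom = t_blossom_queue[0]
    return (z_modified - 2 * delta_sum) / 2
```

For example, a T-blossom with dual 4 inserted at _Δ = 1_ gets priority 6; after further
delta steps bring _Δ_ to 3, its true dual has dropped to 0 and `delta4` returns 0.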

When a new S-blossom is created, form its list _L<sub>B</sub>_ by merging the lists
of its sub-blossoms.
Ignore edges that are internal to the merged blossom.
If there are multiple edges to the same target blossom, keep only the least-slack of these edges.
Then find _e<sub>B</sub>_ of the merged blossom by simply taking the least-slack edge
out of _L<sub>B</sub>_.
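The merge step could be sketched as follows.
All names here are illustrative; this sketch also uses a dictionary keyed by target
blossom for brevity, where a temporary array indexed by blossom would work equally well:

```python
def merge_best_edge_lists(sub_lists, top_blossom, slack, new_blossom):
    """Merge the L_B lists of sub-blossoms into the list of a newly
    created S-blossom.

    sub_lists   : the L_B lists of the sub-blossoms
    top_blossom : maps a vertex to its current top-level blossom
    slack       : function returning the (modified) slack of an edge
    new_blossom : the blossom that was just created
    """
    best = {}  # target blossom -> least-slack edge to that blossom
    for edges in sub_lists:
        for (x, y, w) in edges:
            target = top_blossom[y]
            if target is new_blossom:
                continue  # internal edge; irrelevant for delta3
            e = (x, y, w)
            if target not in best or slack(e) < slack(best[target]):
                best[target] = e
    return list(best.values())
```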

_δ<sub>2</sub>_ is the minimum slack of any edge between an S-vertex and an unlabeled vertex.
To compute this efficiently, we use a fairly complicated strategy that involves
three levels of priority queues.

At the lowest level, every T-vertex or unlabeled vertex maintains a separate priority queue
of edges between itself and any S-vertex.
Edges are inserted into this queue during scanning of newly labeled S-vertices.
Note that S-vertices do not maintain this type of queue.

The priorities of edges in these queues are set to their _modified slack_.
This ensures that the priorities remain unchanged during delta steps.
The priorities also remain unchanged when the T-vertex becomes unlabeled or the unlabeled
vertex becomes a T-vertex.

At the middle level, every T-blossom or unlabeled top-level blossom maintains a priority queue
containing its vertices.
This is in fact the _concatenable priority queue_ that is maintained by every top-level blossom
as was described earlier in this document.
The priority of each vertex in the mid-level queue is set to the minimum priority of any edge
in the low-level queue of that vertex.
If edges are added to (or removed from) the low-level queue, the priority of the corresponding
vertex in the mid-level queue may change.
If the low-level queue of a vertex is empty, that vertex has priority _Infinity_
in the mid-level queue.

At the highest level, unlabeled top-level blossoms are tracked in one global priority queue.
The priority of each blossom in this queue is set to the minimum slack of any edge
from that blossom to an S-vertex, plus _Δ_.
These priorities are invariant under delta steps.

To compute _δ<sub>2</sub>_, we find the minimum priority in the high-level queue
and adjust it by _Δ_.
To find the edge associated with _δ<sub>2</sub>_,
we use the high-level queue to find the unlabeled blossom with minimum priority,
then use that blossom's mid-level queue to find the vertex with minimum priority,
then use that vertex's low-level queue to find the edge with minimum priority.

The whole thing is a bit tricky, but it works.
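The three-level lookup can be sketched with a toy min-queue interface.
These stand-in structures are assumptions for illustration, not the project's
actual classes:

```python
INF = float("inf")

class MinQueue:
    """Toy stand-in for a priority queue (insert and find-min only)."""
    def __init__(self):
        self.prio = {}          # element -> priority
    def insert(self, elem, p):
        self.prio[elem] = p
    def find_min(self):
        return min(self.prio, key=self.prio.get) if self.prio else None
    def min_prio(self):
        return min(self.prio.values()) if self.prio else INF

def find_delta2_edge(high_level, mid_level, low_level, delta_sum):
    """Drill down through the three queue levels.

    high_level holds unlabeled top-level blossoms (priority = min edge
    slack plus Delta), mid_level maps a blossom to its vertex queue,
    low_level maps a vertex to its edge queue (priorities are modified
    slacks).  Returns delta2 and the associated edge.
    """
    delta2 = high_level.min_prio() - delta_sum
    blossom = high_level.find_min()
    vertex = mid_level[blossom].find_min()
    edge = low_level[vertex].find_min()
    return delta2, edge
```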

### Re-using alternating trees

According to [[5]](#galil1986), labels and alternating trees should be erased at the end of each stage.
However, the algorithm can be optimized by keeping some of the labels and re-using them
in the next stage.
The optimized algorithm erases _only_ the two alternating trees that are part of
the augmenting path.
All blossoms in those two trees lose their labels and become free blossoms again.
Other alternating trees, which are not involved in the augmenting path, are preserved
into the next stage, and so are the labels on the blossoms in those trees.

This optimization is well known and is described for example in [[7]](#mehlhorn_schafer2002).
It does not affect the worst-case asymptotic run time of the algorithm,
but it provides a significant practical speed-up for many types of graphs.

Erasing alternating trees is easy enough, but selectively stripping labels off blossoms
has a few implications.
The blossoms that lose their labels need to have their modified dual values updated.
The T-blossoms additionally need to have their _offset<sub>B</sub>_ variables updated
to keep the vertex dual values consistent.
For S-blossoms that lose their labels, the modified vertex dual variables are updated directly.

The various priority queues also need updating.
Former T-blossoms must be removed from the priority queue for _δ<sub>4</sub>_.
Edges incident on former S-vertices must be removed from the priority queues for _δ<sub>3</sub>_ and _δ<sub>2</sub>_.
Finally, S-vertices that become unlabeled need to construct a proper priority queue
of incident edges to other S-vertices for _δ<sub>2</sub>_ tracking.
This involves visiting every incident edge of every vertex in each S-blossom that loses its label.

## Run time of the algorithm

Every stage of the algorithm either increases the number of matched vertices by 2 or
ends the matching.
Therefore the number of stages is at most _n/2_.
Every stage runs in time _O((n + m) log n)_, therefore the complete algorithm runs in
time _O(n (n + m) log n)_.

Creating a blossom reduces the number of top-level blossoms by at least 2,
thus limiting the number of simultaneously existing blossoms to _O(n)_.
Blossoms that are created during a stage become S-blossoms and survive until the end of that stage,
therefore _O(n)_ blossoms are created during a stage.
Creating a blossom with _k_ sub-blossoms reduces the number of top-level blossoms by _k-1_,
thus limiting the total number of sub-blossoms that can be involved in blossom creation
during a stage to _O(n)_.

Creating a blossom involves tracing the alternating path to the closest common ancestor,
which takes time _O(k log n)_ for a blossom with _k_ sub-blossoms.
It also involves bookkeeping per sub-blossom, which takes time _O(log n)_ per sub-blossom.
It also involves relabeling former T-vertices as S-vertices, but I account for that
time separately below so I can ignore it here.
It also involves merging the concatenable queues which track the vertices in top-level blossoms.
Merging two queues takes time _O(log n)_, therefore merging the queues of all sub-blossoms
takes time _O(k log n)_.
Creating a blossom thus takes time _O(k log n)_.
Blossom creation thus takes total time _O(n log n)_ per stage.

As part of creating a blossom, a list _L<sub>B</sub>_ of least-slack edges must be formed.
This involves processing every element of the least-slack edge lists of its sub-blossoms,
and removing redundant edges from the merged list.
Merging and removing redundant edges can be done in one sweep via a temporary array indexed
by target blossom.
Collect the least-slack edges of the sub-blossoms into this array,
indexed by the target blossom of the edge,
keeping only the edge with lowest slack per target blossom.
Then convert the array back into a list by removing unused indices.
This takes _O(1)_ steps per candidate edge, plus _O(n)_ steps to manage the temporary array.
I choose to shift the cost of collecting the candidate edges from the sub-blossoms to
the actions that inserted those edges into the sub-blossom lists.
There are two processes which insert edges into _L<sub>B</sub>_: edge scanning and blossom
creation.
Edge scanning inserts each graph edge at most twice per stage for a total cost of _O(m)_ steps
per stage.
Blossom creation inserts at most _O(n)_ edges per blossom creation.
Therefore the total cost of S-blossom least-slack edge management is
_O(m + n<sup>2</sup>) = O(n<sup>2</sup>)_ steps per stage.

During each stage, a blossom becomes an S-blossom or T-blossom at most once.
A blossom also becomes unlabeled at most once, at the end of the stage.
Changing the label of a blossom takes some simple bookkeeping, as well as operations
on priority queues (_δ<sub>4</sub>_ for T-blossoms, _δ<sub>2</sub>_ for unlabeled
blossoms) which take time _O(log n)_ per blossom.
Assigning label S or removing label S also involves work for the vertices in the blossom
and their edges, but I account for that time separately below so I can ignore it here.
Blossom labeling thus takes total time _O(n log n)_ per stage.

During each stage, a vertex becomes an S-vertex at most once, and an S-vertex becomes
unlabeled at most once.
In both cases, the incident edges of the affected vertex are scanned and potentially
added to or removed from priority queues.
This involves finding the top-level blossoms of the endpoints of each edge, which
takes time _O(log n)_ per edge.
The updates to priority queues also take time _O(log n)_ per edge.
Edge scanning thus takes total time _O(m log n)_ per stage.

Note that _m ≤ n<sup>2</sup>_, therefore _log m ≤ 2 log n_.
This implies that an operation on a priority queue with _m_ elements takes time _O(log n)_.

Expanding a blossom involves some bookkeeping which takes time _O(log n)_ per sub-blossom.
It also involves splitting the concatenable queue that tracks the vertices in top-level blossoms,
which takes time _O(log n)_ per sub-blossom.
In case of a T-blossom, it also involves reconstructing the alternating path through
the blossom, which takes time _O(k log n)_ for _k_ sub-blossoms.
Also in case of a T-blossom, some sub-blossoms will become S-blossoms and their
vertices become S-vertices, but I have already accounted for that cost above
so I can ignore it here.
Expanding a blossom thus takes time _O(k log n)_.
Any blossom is involved as a sub-blossom in an expanding blossom at most once per stage.
Blossom expansion thus takes total time _O(n log n)_ per stage.

The length of an augmenting path is _O(n)_.
Tracing the augmenting path and augmenting the matching along the path can be done
in _O(n log n)_ steps.
Augmenting through a blossom takes a number of steps that is proportional to the number of
its sub-blossoms.
Since there are fewer than _n_ non-trivial blossoms, the total cost of augmenting through
blossoms is _O(n)_ steps.
Augmenting thus takes total time _O(n log n)_ per stage.

A delta step limited by _δ<sub>1</sub>_ ends the algorithm and therefore happens at most once.
A _δ<sub>2</sub>_ step assigns a label to a previously unlabeled blossom and therefore
happens _O(n)_ times per stage.
A _δ<sub>3</sub>_ step either creates a blossom or finds an augmenting path and therefore
happens _O(n)_ times per stage.
A _δ<sub>4</sub>_ step expands a blossom and therefore happens _O(n)_ times per stage.
Therefore the number of delta steps is _O(n)_ per stage.

Calculating _δ<sub>1</sub>_ takes constant time.
Calculating _δ<sub>2</sub>_ and _δ<sub>4</sub>_ requires a constant number
of lookups in priority queues, which takes time _O(log n)_ per delta step.
During calculation of _δ<sub>3</sub>_, it may be necessary to remove stale edges
from the priority queue.
Since every edge is inserted into the priority queue at most once per stage,
at most _O(m)_ edges are removed per stage, which takes total time _O(m log n)_ per stage.
Calculation of _δ_ thus takes total time _O((n + m) log n)_ per stage.

Applying updates to dual variables is done in a lazy fashion as discussed above.
The only variable that is updated directly is _Δ_, which takes time _O(1)_ per delta step.
Updating dual variables thus takes total time _O(n)_ per stage.

## Implementation details

Every edge therefore appears in two adjacency lists.
These data structures are initialized at the start of the matching algorithm
and never change while the algorithm runs.

#### Priority queue

Priority queues are used for a number of purposes:

- a priority queue to find the least-slack edge between S-blossoms;
- a priority queue to find the minimum-dual T-blossom;
- a priority queue to find the unlabeled blossom with least-slack edge to an S-blossom;
- a separate priority queue per vertex to find the least-slack edge between that vertex
  and any S-vertex.

These queues are implemented as binary heaps.
This type of queue supports the following operations:

- _insert_ a new element with specified priority in time _O(log n)_;
- find the element with _minimum_ priority in time _O(1)_;
- _delete_ a specified element in time _O(log n)_;
- _change_ the priority of a specified element in time _O(log n)_.
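Python's standard `heapq` provides _insert_ and _find-min_ but not _delete_ or _change_.
One common way to add them is lazy deletion, sketched below.
This is an assumption for illustration, not necessarily how this project implements its
heaps, and it gives amortized rather than worst-case _O(log n)_ bounds:

```python
import heapq

class LazyHeap:
    """Minimal priority queue with delete and change-priority,
    implemented on top of heapq via lazy deletion."""
    def __init__(self):
        self.heap = []        # (priority, element) entries, possibly stale
        self.current = {}     # element -> its currently valid priority
    def insert(self, elem, prio):
        self.current[elem] = prio
        heapq.heappush(self.heap, (prio, elem))
    def change(self, elem, prio):
        # Old heap entry becomes stale; the new one supersedes it.
        self.insert(elem, prio)
    def delete(self, elem):
        # All heap entries for elem become stale.
        del self.current[elem]
    def find_min(self):
        while self.heap:
            prio, elem = self.heap[0]
            if self.current.get(elem) == prio:
                return elem
            heapq.heappop(self.heap)  # discard stale entry
        return None
```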

#### Concatenable priority queue

Each top-level blossom maintains a concatenable priority queue containing its vertices.
We use a specific type of concatenable queue that supports the following operations
[[4]](#galil_micali_gabow1986) [[8]](#aho_hopcroft_ullman1974):

- _create_ a new queue containing 1 new element;
- find the element with _minimum_ priority in time _O(1)_;
- _change_ the priority of a given element in time _O(log n)_;
- _merge_ two queues into one new queue in time _O(log n)_;
- _split_ a queue, thus undoing the previous _merge_ step, in time _O(log n)_.

The efficient _merge_ and _split_ operations make it possible to adapt the queue during
blossom creation and blossom expansion steps.
The priorities in the queue are used to find, for a given top-level blossom, its vertex
with least-slack edge to an S-blossom.

The concatenable queue is implemented as a balanced tree, specifically a _2-3 tree_.
Each internal node of a 2-3 tree has either 2 or 3 children.
The leaf nodes of the tree represent the elements of the queue.
All leaf nodes are at the same distance from the root.
Each node has a pointer to its parent, and each internal node has pointers to its children.
Each internal node also stores its height (distance to its leaf nodes).

Only leaf nodes have a priority.
However, each internal node maintains a pointer to the leaf node with minimum priority
within its subtree.
As a consequence, the root of the tree has a pointer to the least-priority element in the queue.
To keep this information consistent, any change in the priority of a leaf node must
be followed by updating the internal nodes along a path from the leaf node to the root.
The same must be done when the structure of the tree is adjusted.

The left-to-right order of the leaf nodes is preserved during all operations, including _merge_
and _split_.
When trees _A_ and _B_ are merged, the sequence of leaf nodes in the merged tree will consist of
the leaf nodes of _A_ followed by the leaf nodes of _B_.
Note that the left-to-right order of the leaf nodes is unrelated to the priorities of the elements.

To merge two trees, the root of the smaller tree is inserted as a child of an appropriate node
in the larger tree.
Subsequent adjustments are needed to restore the consistency of the 2-3 tree and to update
the minimum-element pointers of the internal nodes along a path from the insertion point
to the root of the merged tree.
This can be done in a number of steps proportional to the difference in height between
the two trees, which is in any case _O(log n)_.

To split a tree, a _split node_ is identified: the left-most leaf node that must end up
in the right-side tree after splitting.
Internal nodes are deleted along the path from the _split node_ to the root of the tree.
This creates a forest of disconnected subtrees on the left side of the path,
and a similar forest of subtrees on the right side of the split path.
The left-side subtrees are reassembled into a single tree through a series of _merge_ steps.
Although _O(log n)_ merge steps may be needed, the total time required for reassembly
is also _O(log n)_ because each merge step combines trees of similar height.
A similar reassembly is done for the forest on the right side of the split path.

The concatenable queues have an additional purpose in the matching algorithm:
finding the top-level blossom that contains a given vertex.
To do this, we assign a _name_ to each concatenable queue instance, which is simply
a pointer to the top-level blossom that maintains the queue.
An extra operation is defined:
_find_ the name of the queue instance that contains a given element in time _O(log n)_.
Implementing the _find_ operation is easy:
starting at the leaf node that represents the element, follow _parent_ pointers
to the root of the tree.
The root node contains the name of the queue.
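In code, the _find_ operation could be as simple as the sketch below.
The attribute names are illustrative; the actual classes in this project may differ:

```python
class Node:
    """Minimal tree node: leaves and internal nodes both carry a parent
    pointer; the root additionally stores the queue's name (owner)."""
    def __init__(self, parent=None, owner=None):
        self.parent = parent
        self.owner = owner

def find_queue_name(leaf):
    """Walk parent pointers from a leaf to the root; this is O(log n)
    because the 2-3 tree is balanced.  The root stores the name of the
    queue, i.e. a pointer to the top-level blossom."""
    node = leaf
    while node.parent is not None:
        node = node.parent
    return node.owner
```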

#### General data

`vertex_mate[x] = y` if the edge between vertex _x_ and vertex _y_ is matched. <br>
`vertex_mate[x] = -1` if vertex _x_ is unmatched.

`vertex_top_blossom[x] =` pointer to _B(x)_, the top-level blossom that contains vertex _x_. <br>
`vertex_dual[x]` holds the modified vertex dual _u'<sub>x</sub>_.

A global list holds vertex indices of S-vertices whose edges have not yet been scanned.
Vertices are inserted in this list when their top-level blossom gets label S.

#### Blossoms

Both types of blossoms are represented as class instances with the following attributes:

* `B.label` is `S` or `T` or `None`.
* `B.tree_edge = (x, y)` if _B_ is a labeled top-level blossom, where _y_ is a vertex in _B_
  and _(x, y)_ is the edge that links _B_ to its parent in the alternating tree.
* `B.tree_blossoms` points to a list of blossoms in the same alternating tree, if _B_
  is a labeled top-level blossom.
* `B.vertex_dual_offset` holds the pending change to vertex duals _offset<sub>B</sub>_.

A non-trivial blossom additionally has the following attributes:

* `B.subblossoms` is an array of pointers to the sub-blossoms of _B_,
  starting with the sub-blossom that contains the base vertex.
* `B.edges` is an array of alternating edges connecting the sub-blossoms.
* `B.dual_var` holds the modified blossom dual _z'<sub>B</sub>_.

Single-vertex blossoms are kept in an array indexed by vertex index. <br>
Non-trivial blossoms are kept in a separate list. <br>
These structures are used to iterate over blossoms and to find the trivial blossom
that consists of a given vertex.

#### Least-slack edge tracking

`vertex_best_edge[x]` is an array holding _e<sub>x</sub>_, the edge index of
the least-slack edge between vertex _x_ and any S-vertex, or -1 if there is no such edge.
This value is only meaningful if _x_ is a T-vertex or unlabeled vertex.

`B.best_edge` is a blossom attribute holding _e<sub>B</sub>_, the edge index of the least-slack
edge between blossom _B_ and any other S-blossom, or -1 if there is no such edge.
This value is only meaningful if _B_ is a top-level S-blossom.

For non-trivial S-blossoms _B_, attribute `B.best_edge_set` holds the list _L<sub>B</sub>_
of potential least-slack edges to other blossoms.
This list is not maintained for single-vertex blossoms, since _L<sub>B</sub>_ of a single vertex
can be efficiently constructed from its adjacency list.
#### Memory usage

The data structures described above use a constant amount of memory per vertex, per edge,
and per blossom.
Therefore the total memory requirement is _O(m + n)_.

The memory usage of _L<sub>B</sub>_ is a little tricky.
Any given list _L<sub>B</sub>_ can have length _O(n)_, and _O(n)_ of these lists can exist
simultaneously.
Naively allocating space for _O(n)_ elements per list will drive memory usage
up to _O(n<sup>2</sup>)_.
However, at any moment, an edge can be in at most two of these lists, therefore the sum
of the lengths of these lists is limited to _O(m)_.
A possible solution is to implement the _L<sub>B</sub>_ as linked lists.
### Performance critical routines

Calculations that happen very frequently in the algorithm are:
determining the top-level blossom of a given vertex, and calculating the slack of a given edge.
These calculations must run in constant time per call in any case, but it makes sense to put
some extra effort into making these calculations _fast_.

### Recursion

Certain tasks in the algorithm are recursive in nature:

Proof by induction that all vertex duals are multiples of 0.5 and all blossom duals are integers:

- Blossom duals increase or decrease by _2\*δ_,
  therefore updated blossom duals are still integers.

It is useful to know that (modified) dual variables and (modified) edge slacks
are limited to a finite range of values which depends on the maximum edge weight.
This may be helpful when choosing an integer data type for these variables.
(Alternatively, choose a programming language with unlimited integer range.
This is perhaps the thing I love most about Python.)

- The value of _Δ_ (sum over _δ_ steps) does not exceed _maxweight / 2_.
  Proof:
  - Vertex dual variables start at _u<sub>x</sub> = maxweight / 2_.
  - While the algorithm runs, there is at least one vertex which has been unmatched
    since the beginning.
    This vertex has always had label S, therefore its dual is _maxweight/2 - Δ_.
    Vertex duals can not be negative, therefore _Δ ≤ maxweight/2_.
- Vertex duals are limited to the range 0 to _maxweight_.
- Blossom duals are limited to the range 0 to _maxweight_.
- Edge slack is limited to the range 0 to _2\*maxweight_.
- Modified vertex duals are limited to the range 0 to _1.5\*maxweight_.
- Modified blossom duals are limited to the range _-maxweight_ to _2\*maxweight_.
- Modified edge slack is limited to the range 0 to _3\*maxweight_.
- Dual offsets are limited to the range _-maxweight/2_ to _maxweight/2_.
### Handling floating point edge weights

Floating point calculations are subject to rounding errors.
This has two consequences for the matching algorithm:

- The algorithm may return a matching which has slightly lower weight than
  the actual maximum weight.
  The algorithm will always return a valid matching, even if rounding errors occur.
  Floating point comparisons affect which actions are taken during delta steps,
  and thus eventually determine which edges are matched.
  But the overall structure of the algorithm guarantees that it will eventually return
  a valid (if possibly suboptimal) matching.

- The algorithm may not reliably recognize tight edges.
  To check whether an edge is tight, its slack is compared to zero.
  Rounding errors may cause the slack to appear positive even when an exact calculation
  would show it to be zero.
  The slack of some edges may even become slightly negative.

I believe this does not affect the correctness of the algorithm.
An edge that should be tight but is not recognized as tight due to rounding errors,
can be pulled tight through an additional dual variable update.
As a side effect of this update, the edge will immediately be used to grow the alternating tree,
or to construct a blossom or augmenting path.
This mechanism allows the algorithm to make progress, even if slack comparisons
are repeatedly thrown off by rounding errors.
Rounding errors may cause the algorithm to perform more dual variable updates
than strictly necessary.
But this will still not cause the run time of the algorithm to exceed _O(n<sup>3</sup>)_.

It seems to me that the matching algorithm is stable for floating point weights.
And it seems to me that it returns a matching which is close to optimal,
and which could have been optimal if edge weights were changed by small amounts.

I must admit these arguments are mostly based on intuition.
The most challenging cases are probably graphs with edge weights that differ by many orders
of magnitude.
Unfortunately I don't know how to properly analyze the floating point accuracy of this algorithm.
### Finding a maximum weight matching out of all maximum cardinality matchings

changing all weights by the same amount doesn't change which of these matchings has maximum weight.

   ([link](https://dl.acm.org/doi/abs/10.5555/320176.320229))
   ([pdf](https://dl.acm.org/doi/pdf/10.5555/320176.320229))

7. <a id="mehlhorn_schafer2002"></a>
   Kurt Mehlhorn, Guido Schäfer,
   "Implementation of O(nm log(n)) Weighted Matchings in General Graphs: The Power of Data Structures",
   _Journal of Experimental Algorithmics vol. 7_, 2002.
   ([link](https://dl.acm.org/doi/10.1145/944618.944622))

8. <a id="aho_hopcroft_ullman1974"></a>
   Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman,
   _The Design and Analysis of Computer Algorithms_, Addison-Wesley, 1974.

---
Written in 2023-2024 by Joris van Rantwijk.
This work is licensed under [CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0/).

pylintrc:

@ -13,6 +13,7 @@ disable=consider-using-in,
        too-many-lines,
        too-many-locals,
        too-many-nested-blocks,
        too-many-positional-arguments,
        too-many-public-methods,
        too-many-return-statements,
        too-many-statements,

@ -0,0 +1,11 @@
"""
Algorithm for finding a maximum weight matching in general graphs.
"""

__all__ = ["maximum_weight_matching",
           "adjust_weights_for_maximum_cardinality_matching",
           "MatchingError"]

from .algorithm import (maximum_weight_matching,
                        adjust_weights_for_maximum_cardinality_matching,
                        MatchingError)
|
File diff suppressed because it is too large
Load Diff
|
@@ -0,0 +1,634 @@
"""Data structures for matching."""

from __future__ import annotations

from typing import Generic, Optional, TypeVar


_NameT = TypeVar("_NameT")
_NameT2 = TypeVar("_NameT2")
_ElemT = TypeVar("_ElemT")
_ElemT2 = TypeVar("_ElemT2")


class ConcatenableQueue(Generic[_NameT, _ElemT]):
    """Priority queue supporting efficient merge and split operations.

    This is a combination of a disjoint set and a priority queue.
    A queue has a "name", which can be any Python object.
    Each element has associated "data", which can be any Python object.
    Each element has a priority.

    The following operations can be done efficiently:
     - Create a new queue containing one new element.
     - Find the name of the queue that contains a given element.
     - Change the priority of a given element.
     - Find the element with lowest priority in a given queue.
     - Merge two or more queues.
     - Undo a previous merge step.

    This data structure is implemented as a 2-3 tree with minimum-priority
    tracking added to it.
    """

    __slots__ = ("name", "tree", "first_node", "sub_queues")

    class BaseNode(Generic[_NameT2, _ElemT2]):
        """Node in the 2-3 tree."""

        __slots__ = ("owner", "min_node", "height", "parent", "childs")

        def __init__(self,
                     min_node: ConcatenableQueue.Node[_NameT2, _ElemT2],
                     height: int
                     ) -> None:
            """Initialize a new node."""
            self.owner: Optional[ConcatenableQueue[_NameT2, _ElemT2]] = None
            self.min_node = min_node
            self.height = height
            self.parent: Optional[ConcatenableQueue.BaseNode[_NameT2,
                                                             _ElemT2]]
            self.parent = None
            self.childs: list[ConcatenableQueue.BaseNode[_NameT2, _ElemT2]]
            self.childs = []

    class Node(BaseNode[_NameT2, _ElemT2]):
        """Leaf node in the 2-3 tree, representing an element in the queue."""

        __slots__ = ("data", "prio")

        def __init__(self, data: _ElemT2, prio: float) -> None:
            """Initialize a new leaf node.

            This method should not be called directly.
            Instead, call ConcatenableQueue.insert().
            """
            super().__init__(min_node=self, height=0)
            self.data = data
            self.prio = prio

        def find(self) -> _NameT2:
            """Return the name of the queue that contains this element.

            This function takes time O(log(n)).
            """
            node: ConcatenableQueue.BaseNode[_NameT2, _ElemT2] = self
            while node.parent is not None:
                node = node.parent
            assert node.owner is not None
            return node.owner.name

        def set_prio(self, prio: float) -> None:
            """Change the priority of this element.

            This function takes time O(log(n)).
            """
            self.prio = prio
            node = self.parent
            while node is not None:
                min_node = node.childs[0].min_node
                for child in node.childs[1:]:
                    if child.min_node.prio < min_node.prio:
                        min_node = child.min_node
                node.min_node = min_node
                node = node.parent

    def __init__(self, name: _NameT) -> None:
        """Initialize an empty queue.

        This function takes time O(1).

        Parameters:
            name: Name to assign to the new queue.
        """
        self.name = name
        self.tree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
        self.first_node: Optional[ConcatenableQueue.Node[_NameT, _ElemT]]
        self.first_node = None
        self.sub_queues: list[ConcatenableQueue[_NameT, _ElemT]] = []

    def clear(self) -> None:
        """Remove all elements from the queue.

        This function takes time O(n).
        """
        node = self.tree
        self.tree = None
        self.first_node = None
        self.sub_queues = []

        # Wipe pointers to enable refcounted garbage collection.
        if node is not None:
            node.owner = None
        while node is not None:
            node.min_node = None  # type: ignore
            prev_node = node
            if node.childs:
                node = node.childs.pop()
            else:
                node = node.parent
                prev_node.parent = None

    def insert(self, elem: _ElemT, prio: float) -> Node[_NameT, _ElemT]:
        """Insert an element into the empty queue.

        This function can only be used if the queue is empty.
        Non-empty queues can grow only by merging.

        This function takes time O(1).

        Parameters:
            elem: Element to insert.
            prio: Initial priority of the new element.
        """
        assert self.tree is None
        self.tree = ConcatenableQueue.Node(elem, prio)
        self.tree.owner = self
        self.first_node = self.tree
        return self.tree

    def min_prio(self) -> float:
        """Return the minimum priority of any element in the queue.

        The queue must be non-empty.
        This function takes time O(1).
        """
        node = self.tree
        assert node is not None
        return node.min_node.prio

    def min_elem(self) -> _ElemT:
        """Return the element with minimum priority.

        The queue must be non-empty.
        This function takes time O(1).
        """
        node = self.tree
        assert node is not None
        return node.min_node.data

    def merge(self,
              sub_queues: list[ConcatenableQueue[_NameT, _ElemT]]
              ) -> None:
        """Merge the specified queues.

        This queue must initially be empty.
        All specified sub-queues must initially be non-empty.

        This function removes all elements from the specified sub-queues
        and adds them to this queue.

        After merging, this queue retains a reference to the list of
        sub-queues.

        This function takes time O(len(sub_queues) * log(n)).
        """
        assert self.tree is None
        assert not self.sub_queues
        assert sub_queues

        # Keep the list of sub-queues.
        self.sub_queues = sub_queues

        # Move the root node from the first sub-queue to this queue.
        # Clear its owner pointer.
        self.tree = sub_queues[0].tree
        self.first_node = sub_queues[0].first_node
        assert self.tree is not None
        sub_queues[0].tree = None
        self.tree.owner = None

        # Merge remaining sub-queues.
        for sub in sub_queues[1:]:

            # Pull the root node from the sub-queue.
            # Clear its owner pointer.
            subtree = sub.tree
            assert subtree is not None
            assert subtree.owner is sub
            subtree.owner = None

            # Merge our current tree with the tree from the sub-queue.
            self.tree = self._join(self.tree, subtree)

        # Put the owner pointer in the root node.
        self.tree.owner = self

    def split(self) -> None:
        """Undo the merge step that filled this queue.

        Remove all elements from this queue and put them back in
        the sub-queues from which they came.

        After splitting, this queue will be empty.

        This function takes time O(k * log(n)).
        """
        assert self.tree is not None
        assert self.sub_queues

        # Clear the owner pointer from the root node.
        assert self.tree.owner is self
        self.tree.owner = None

        # Split the tree to reconstruct each sub-queue.
        for sub in self.sub_queues[:0:-1]:

            assert sub.first_node is not None
            (tree, rtree) = self._split_tree(sub.first_node)

            # Assign the right tree to the sub-queue.
            sub.tree = rtree
            rtree.owner = sub

        # Put the remaining tree in the first sub-queue.
        self.sub_queues[0].tree = tree
        tree.owner = self.sub_queues[0]

        # Make this queue empty.
        self.tree = None
        self.first_node = None
        self.sub_queues = []

    @staticmethod
    def _repair_node(node: BaseNode[_NameT, _ElemT]) -> None:
        """Repair the min_node attribute of an internal node."""
        min_node = node.childs[0].min_node
        for child in node.childs[1:]:
            if child.min_node.prio < min_node.prio:
                min_node = child.min_node
        node.min_node = min_node

    @staticmethod
    def _new_internal_node(ltree: BaseNode[_NameT, _ElemT],
                           rtree: BaseNode[_NameT, _ElemT]
                           ) -> BaseNode[_NameT, _ElemT]:
        """Create a new internal node with 2 child nodes."""
        assert ltree.height == rtree.height
        height = ltree.height + 1
        if ltree.min_node.prio <= rtree.min_node.prio:
            min_node = ltree.min_node
        else:
            min_node = rtree.min_node
        node = ConcatenableQueue.BaseNode(min_node, height)
        node.childs = [ltree, rtree]
        ltree.parent = node
        rtree.parent = node
        return node

    def _join_right(self,
                    ltree: BaseNode[_NameT, _ElemT],
                    rtree: BaseNode[_NameT, _ElemT]
                    ) -> BaseNode[_NameT, _ElemT]:
        """Join two trees together.

        The initial left subtree must be higher than the right subtree.

        Return the root node of the joined tree.
        """

        # Descend down the right spine of the left tree until we
        # reach a node just above the right tree.
        node = ltree
        while node.height > rtree.height + 1:
            node = node.childs[-1]

        assert node.height == rtree.height + 1

        # Find a node in the left tree to insert the right tree as child.
        while len(node.childs) == 3:
            # This node already has 3 childs so we can not add the right tree.
            # Rearrange into 2 nodes with 2 childs each, then solve it
            # at the parent level.
            #
            #      N                N      R'
            #    / | \             / \    / \
            #   /  |  \    --->   /   \  /   \
            #  A   B   C  R      A     B C    R
            #
            child = node.childs.pop()
            self._repair_node(node)
            rtree = self._new_internal_node(child, rtree)
            if node.parent is None:
                # Create a new root node.
                return self._new_internal_node(node, rtree)
            node = node.parent

        # Insert the right tree as child of this node.
        assert len(node.childs) < 3
        node.childs.append(rtree)
        rtree.parent = node

        # Repair min-prio pointers of ancestors.
        while True:
            self._repair_node(node)
            if node.parent is None:
                break
            node = node.parent

        return node

    def _join_left(self,
                   ltree: BaseNode[_NameT, _ElemT],
                   rtree: BaseNode[_NameT, _ElemT]
                   ) -> BaseNode[_NameT, _ElemT]:
        """Join two trees together.

        The initial left subtree must be lower than the right subtree.

        Return the root node of the joined tree.
        """

        # Descend down the left spine of the right tree until we
        # reach a node just above the left tree.
        node = rtree
        while node.height > ltree.height + 1:
            node = node.childs[0]

        assert node.height == ltree.height + 1

        # Find a node in the right tree to insert the left tree as child.
        while len(node.childs) == 3:
            # This node already has 3 childs so we can not add the left tree.
            # Rearrange into 2 nodes with 2 childs each, then solve it
            # at the parent level.
            #
            #         N             L'      N
            #       / | \          / \     / \
            #      /  |  \  --->  /   \   /   \
            #  L  A   B   C      L     A B     C
            #
            child = node.childs.pop(0)
            self._repair_node(node)
            ltree = self._new_internal_node(ltree, child)
            if node.parent is None:
                # Create a new root node.
                return self._new_internal_node(ltree, node)
            node = node.parent

        # Insert the left tree as child of this node.
        assert len(node.childs) < 3
        node.childs.insert(0, ltree)
        ltree.parent = node

        # Repair min-prio pointers of ancestors.
        while True:
            self._repair_node(node)
            if node.parent is None:
                break
            node = node.parent

        return node

    def _join(self,
              ltree: BaseNode[_NameT, _ElemT],
              rtree: BaseNode[_NameT, _ElemT]
              ) -> BaseNode[_NameT, _ElemT]:
        """Join two trees together.

        The left and right subtree must be consistent 2-3 trees.
        Initial parent pointers of these subtrees are ignored.

        Return the root node of the joined tree.
        """
        if ltree.height > rtree.height:
            return self._join_right(ltree, rtree)
        elif ltree.height < rtree.height:
            return self._join_left(ltree, rtree)
        else:
            return self._new_internal_node(ltree, rtree)

    def _split_tree(self,
                    split_node: BaseNode[_NameT, _ElemT]
                    ) -> tuple[BaseNode[_NameT, _ElemT],
                               BaseNode[_NameT, _ElemT]]:
        """Split a tree on a specified node.

        Two new trees will be constructed.
        Leaf nodes to the left of "split_node" will go to the left tree.
        Leaf nodes to the right of "split_node", and "split_node" itself,
        will go to the right tree.

        Return tuple (ltree, rtree).
        """

        # Detach "split_node" from its parent.
        # Assign it to the right tree.
        parent = split_node.parent
        split_node.parent = None

        # The left tree is initially empty.
        # The right tree initially contains only "split_node".
        ltree: Optional[ConcatenableQueue.BaseNode[_NameT, _ElemT]] = None
        rtree = split_node

        # Walk up to the root of the tree.
        # On the way up, detach each node from its parent and join its
        # child nodes to the appropriate tree.
        node = split_node
        while parent is not None:

            # Ascend to the parent node.
            child = node
            node = parent
            parent = node.parent

            # Detach "node" from its parent.
            node.parent = None

            if len(node.childs) == 3:
                if node.childs[0] is child:
                    # "node" has 3 child nodes.
                    # Its left subtree has already been split.
                    # Turn it into a 2-node and join it to the right tree.
                    node.childs.pop(0)
                    self._repair_node(node)
                    rtree = self._join(rtree, node)
                elif node.childs[2] is child:
                    # "node" has 3 child nodes.
                    # Its right subtree has already been split.
                    # Turn it into a 2-node and join it to the left tree.
                    node.childs.pop()
                    self._repair_node(node)
                    if ltree is None:
                        ltree = node
                    else:
                        ltree = self._join(node, ltree)
                else:
                    # "node" has 3 child nodes.
                    # Its middle subtree has already been split.
                    # Join its left child to the left tree, and its right
                    # child to the right tree, then delete "node".
                    node.childs[0].parent = None
                    node.childs[2].parent = None
                    if ltree is None:
                        ltree = node.childs[0]
                    else:
                        ltree = self._join(node.childs[0], ltree)
                    rtree = self._join(rtree, node.childs[2])

            elif node.childs[0] is child:
                # "node" has 2 child nodes.
                # Its left subtree has already been split.
                # Join its right child to the right tree, then delete "node".
                node.childs[1].parent = None
                rtree = self._join(rtree, node.childs[1])

            else:
                # "node" has 2 child nodes.
                # Its right subtree has already been split.
                # Join its left child to the left tree, then delete "node".
                node.childs[0].parent = None
                if ltree is None:
                    ltree = node.childs[0]
                else:
                    ltree = self._join(node.childs[0], ltree)

        assert ltree is not None
        return (ltree, rtree)

class PriorityQueue(Generic[_ElemT]):
    """Priority queue based on a binary heap."""

    __slots__ = ("heap", )

    class Node(Generic[_ElemT2]):
        """Node in the priority queue."""

        __slots__ = ("index", "prio", "data")

        def __init__(
                self,
                index: int,
                prio: float,
                data: _ElemT2
                ) -> None:
            self.index = index
            self.prio = prio
            self.data = data

    def __init__(self) -> None:
        """Initialize an empty queue."""
        self.heap: list[PriorityQueue.Node[_ElemT]] = []

    def clear(self) -> None:
        """Remove all elements from the queue.

        This function takes time O(n).
        """
        self.heap.clear()

    def empty(self) -> bool:
        """Return True if the queue is empty."""
        return (not self.heap)

    def find_min(self) -> Node[_ElemT]:
        """Return the minimum-priority node.

        This function takes time O(1).
        """
        if not self.heap:
            raise IndexError("Queue is empty")
        return self.heap[0]

    def _sift_up(self, index: int) -> None:
        """Repair the heap along an ascending path to the root."""
        node = self.heap[index]
        prio = node.prio

        pos = index
        while pos > 0:
            tpos = (pos - 1) // 2
            tnode = self.heap[tpos]
            if tnode.prio <= prio:
                break
            tnode.index = pos
            self.heap[pos] = tnode
            pos = tpos

        if pos != index:
            node.index = pos
            self.heap[pos] = node

    def _sift_down(self, index: int) -> None:
        """Repair the heap along a descending path."""
        num_elem = len(self.heap)
        node = self.heap[index]
        prio = node.prio

        pos = index
        while True:
            tpos = 2 * pos + 1
            if tpos >= num_elem:
                break
            tnode = self.heap[tpos]

            qpos = tpos + 1
            if qpos < num_elem:
                qnode = self.heap[qpos]
                if qnode.prio <= tnode.prio:
                    tpos = qpos
                    tnode = qnode

            if tnode.prio >= prio:
                break

            tnode.index = pos
            self.heap[pos] = tnode
            pos = tpos

        if pos != index:
            node.index = pos
            self.heap[pos] = node

    def insert(self, prio: float, data: _ElemT) -> Node:
        """Insert a new element into the queue.

        This function takes time O(log(n)).

        Returns:
            Node that represents the new element.
        """
        new_index = len(self.heap)
        node = self.Node(new_index, prio, data)
        self.heap.append(node)
        self._sift_up(new_index)
        return node

    def delete(self, elem: Node[_ElemT]) -> None:
        """Delete the specified element from the queue.

        This function takes time O(log(n)).
        """
        index = elem.index
        assert self.heap[index] is elem

        node = self.heap.pop()
        if index < len(self.heap):
            node.index = index
            self.heap[index] = node
            if node.prio < elem.prio:
                self._sift_up(index)
            elif node.prio > elem.prio:
                self._sift_down(index)

    def decrease_prio(self, elem: Node[_ElemT], prio: float) -> None:
        """Decrease the priority of an existing element in the queue.

        This function takes time O(log(n)).
        """
        assert self.heap[elem.index] is elem
        assert prio <= elem.prio
        elem.prio = prio
        self._sift_up(elem.index)

    def increase_prio(self, elem: Node[_ElemT], prio: float) -> None:
        """Increase the priority of an existing element in the queue.

        This function takes time O(log(n)).
        """
        assert self.heap[elem.index] is elem
        assert prio >= elem.prio
        elem.prio = prio
        self._sift_down(elem.index)
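For reference, the `PriorityQueue` above is a classic indexed binary heap: every node carries its own position in the heap array, which is what lets `delete` and `decrease_prio` start repairing the heap directly at the affected element. A minimal self-contained sketch of that idea (illustrative only, not part of the package):

```python
class MiniHeap:
    """Minimal indexed binary min-heap (illustrative, not from the package).

    Each node stores its own index in the heap array, so a priority
    decrease can repair the heap starting at the node in O(log n).
    """

    class Node:
        def __init__(self, index, prio, data):
            self.index = index
            self.prio = prio
            self.data = data

    def __init__(self):
        self.heap = []

    def insert(self, prio, data):
        node = MiniHeap.Node(len(self.heap), prio, data)
        self.heap.append(node)
        self._sift_up(node.index)
        return node

    def find_min(self):
        return self.heap[0]

    def decrease_prio(self, node, prio):
        assert prio <= node.prio
        node.prio = prio
        self._sift_up(node.index)   # repair starts at the stored index

    def _sift_up(self, pos):
        # Move heap[pos] towards the root until its parent is not larger,
        # updating the stored index of every node that shifts down.
        node = self.heap[pos]
        while pos > 0:
            tpos = (pos - 1) // 2
            if self.heap[tpos].prio <= node.prio:
                break
            self.heap[tpos].index = pos
            self.heap[pos] = self.heap[tpos]
            pos = tpos
        node.index = pos
        self.heap[pos] = node


h = MiniHeap()
h.insert(5.0, "a")
h.insert(3.0, "b")
c = h.insert(4.0, "c")
print(h.find_min().data)   # b
h.decrease_prio(c, 1.0)
print(h.find_min().data)   # c
```

Without the stored index, decrease-priority would first have to search the array for the element, losing the O(log n) bound that the matching algorithm relies on.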
@@ -0,0 +1,15 @@

[build-system]
requires = ["setuptools >= 61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "mwmatching"
version = "3.0"
authors = [{name = "Joris van Rantwijk"}]
requires-python = ">= 3.7"
classifiers = [
    "License :: OSI Approved :: MIT License",
    "Private :: Do Not Upload"
]
@@ -18,7 +18,7 @@ from mwmatching import (maximum_weight_matching,
                        adjust_weights_for_maximum_cardinality_matching)


-def parse_int_or_float(s: str) -> int|float:
+def parse_int_or_float(s: str) -> float:
    """Convert a string to integer or float value."""
    try:
        return int(s)

@@ -27,7 +27,7 @@ def parse_int_or_float(s: str) -> int|float:
        return float(s)


-def read_dimacs_graph(f: TextIO) -> list[tuple[int, int, int|float]]:
+def read_dimacs_graph(f: TextIO) -> list[tuple[int, int, float]]:
    """Read a graph in DIMACS edge list format."""

    edges: list[tuple[int, int, float]] = []

@@ -70,7 +70,7 @@ def read_dimacs_graph(f: TextIO) -> list[tuple[int, int, int|float]]:
    return edges


-def read_dimacs_graph_file(filename: str) -> list[tuple[int, int, int|float]]:
+def read_dimacs_graph_file(filename: str) -> list[tuple[int, int, float]]:
    """Read a graph from file or stdin."""
    if filename:
        with open(filename, "r", encoding="ascii") as f:

@@ -87,11 +87,11 @@ def read_dimacs_graph_file(filename: str) -> list[tuple[int, int, int|float]]:

def read_dimacs_matching(
        f: TextIO
-        ) -> tuple[int|float, list[tuple[int, int]]]:
+        ) -> tuple[float, list[tuple[int, int]]]:
    """Read a matching solution in DIMACS format."""

    have_weight = False
-    weight: int|float = 0
+    weight: float = 0
    pairs: list[tuple[int, int]] = []

    for line in f:

@@ -138,7 +138,7 @@ def read_dimacs_matching(

def read_dimacs_matching_file(
        filename: str
-        ) -> tuple[int|float, list[tuple[int, int]]]:
+        ) -> tuple[float, list[tuple[int, int]]]:
    """Read a matching from file."""
    with open(filename, "r", encoding="ascii") as f:
        try:

@@ -149,7 +149,7 @@ def read_dimacs_matching_file(

def write_dimacs_matching(
        f: TextIO,
-        weight: int|float,
+        weight: float,
        pairs: list[tuple[int, int]]
        ) -> None:
    """Write a matching solution in DIMACS format."""

@@ -165,7 +165,7 @@ def write_dimacs_matching(

def write_dimacs_matching_file(
        filename: str,
-        weight: int|float,
+        weight: float,
        pairs: list[tuple[int, int]]
        ) -> None:
    """Write a matching to file or stdout."""

@@ -177,15 +177,15 @@ def write_dimacs_matching_file(


def calc_matching_weight(
-        edges: list[tuple[int, int, int|float]],
+        edges: list[tuple[int, int, float]],
        pairs: list[tuple[int, int]]
-        ) -> int|float:
+        ) -> float:
    """Verify that the matching is valid and calculate its weight.

    Matched pairs are assumed to be in the same order as edges.
    """

-    weight: int|float = 0
+    weight: float = 0

    edge_pos = 0
    for pair in pairs:

@@ -267,7 +267,7 @@ def verify_matching(filename: str, maxcard: bool, wfactor: float) -> bool:
    edges = read_dimacs_graph_file(filename)
    (gold_weight, gold_pairs) = read_dimacs_matching_file(matching_filename)

-    edges_adj: Sequence[tuple[int, int, int|float]] = edges
+    edges_adj: Sequence[tuple[int, int, float]] = edges

    if wfactor != 1.0:
        if wfactor.is_integer():
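The hunks above consistently replace `int|float` annotations with plain `float`. Under PEP 484's numeric tower, a parameter or return annotated `float` also accepts `int`, so the union was redundant. A standalone copy of the function, mirroring the body shown in the diff, illustrates that the narrowed annotation still yields exact `int` values at runtime:

```python
def parse_int_or_float(s: str) -> float:
    """Convert a string to an integer or float value.

    The ``float`` annotation also covers ``int`` under PEP 484's
    numeric tower, which is why the ``int|float`` union was dropped.
    """
    try:
        return int(s)
    except ValueError:
        return float(s)


print(parse_int_or_float("42"))    # 42  (still an exact int at runtime)
print(parse_int_or_float("2.5"))   # 2.5
```

The annotation only affects static type checking; the runtime values are unchanged, so integer edge weights keep exact arithmetic.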
@ -4,10 +4,12 @@ import math
|
|||
import unittest
|
||||
from unittest.mock import Mock
|
||||
|
||||
import mwmatching
|
||||
from mwmatching import (
|
||||
maximum_weight_matching as mwm,
|
||||
adjust_weights_for_maximum_cardinality_matching as adj)
|
||||
from mwmatching.algorithm import (
|
||||
MatchingError, GraphInfo, Blossom, NonTrivialBlossom, MatchingContext,
|
||||
verify_optimum)
|
||||
|
||||
|
||||
class TestMaximumWeightMatching(unittest.TestCase):
|
||||
|
@ -243,6 +245,35 @@ class TestMaximumWeightMatching(unittest.TestCase):
|
|||
mwm([(1,2,19), (1,4,17), (1,5,19), (2,3,15), (2,5,21), (4,6,18), (4,7,11), (5,6,19)]),
|
||||
[(1,5), (2,3), (4,6)])
|
||||
|
||||
def test52_augment_blossom_nested2(self):
|
||||
"""augment nested blossoms"""
|
||||
#
|
||||
# [4]--15 19--[2]
|
||||
# | \ / |
|
||||
# 16 [1] 21
|
||||
# | / \ |
|
||||
# [5]--17 19--[3]
|
||||
# |
|
||||
# 10
|
||||
# |
|
||||
# [6]
|
||||
#
|
||||
self.assertEqual(
|
||||
mwm([(1,2,19), (1,3,19), (1,4,15), (1,5,17), (2,3,21), (4,5,16), (5,6,10)]),
|
||||
[(1,4), (2,3), (5,6)])
|
||||
|
||||
def test61_triangles_n9(self):
|
||||
"""t.f 9 nodes"""
|
||||
#
|
||||
# [1]------[4] [7]
|
||||
# | \ | \ | \
|
||||
# | [3] | [6] | [9]
|
||||
# | / | / | /
|
||||
# [2] [5]------[8]
|
||||
#
|
||||
result = mwm([(1,2,1), (1,3,1), (2,3,1), (4,5,1), (4,6,1), (5,6,1), (7,8,1), (7,9,1), (8,9,1), (1,4,1), (5,8,1)])
|
||||
self.assertEqual(len(result), 4)
|
||||
|
||||
def test_fail_bad_input(self):
|
||||
"""bad input values"""
|
||||
with self.assertRaises(TypeError):
|
||||
|
@ -402,10 +433,10 @@ class TestMaximumCardinalityMatching(unittest.TestCase):
|
|||
|
||||
|
||||
class TestGraphInfo(unittest.TestCase):
|
||||
"""Test _GraphInfo helper class."""
|
||||
"""Test GraphInfo helper class."""
|
||||
|
||||
def test_empty(self):
|
||||
graph = mwmatching._GraphInfo([])
|
||||
graph = GraphInfo([])
|
||||
self.assertEqual(graph.num_vertex, 0)
|
||||
self.assertEqual(graph.edges, [])
|
||||
self.assertEqual(graph.adjacent_edges, [])
|
||||
|
@ -420,8 +451,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate,
|
||||
vertex_dual_2x,
|
||||
nontrivial_blossom):
|
||||
ctx = Mock(spec=mwmatching._MatchingContext)
|
||||
ctx.graph = mwmatching._GraphInfo(edges)
|
||||
ctx = Mock(spec=MatchingContext)
|
||||
ctx.graph = GraphInfo(edges)
|
||||
ctx.vertex_mate = vertex_mate
|
||||
ctx.vertex_dual_2x = vertex_dual_2x
|
||||
ctx.nontrivial_blossom = nontrivial_blossom
|
||||
|
@ -434,7 +465,7 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 1],
|
||||
vertex_dual_2x=[0, 20, 2],
|
||||
nontrivial_blossom=[])
|
||||
mwmatching._verify_optimum(ctx)
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_asymmetric_matching(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -443,8 +474,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 0],
|
||||
vertex_dual_2x=[0, 20, 2],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_nonexistent_matched_edge(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -453,8 +484,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[2, -1, 0],
|
||||
vertex_dual_2x=[11, 11, 11],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_negative_vertex_dual(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -463,8 +494,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 1],
|
||||
vertex_dual_2x=[-2, 22, 0],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_unmatched_nonzero_dual(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -473,8 +504,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 1],
|
||||
vertex_dual_2x=[9, 11, 11],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_negative_edge_slack(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -483,8 +514,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 1],
|
||||
vertex_dual_2x=[0, 11, 11],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_matched_edge_slack(self):
|
||||
edges = [(0,1,10), (1,2,11)]
|
||||
|
@ -493,8 +524,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[-1, 2, 1],
|
||||
vertex_dual_2x=[0, 20, 11],
|
||||
nontrivial_blossom=[])
|
||||
with self.assertRaises(mwmatching.MatchingError):
|
||||
mwmatching._verify_optimum(ctx)
|
||||
with self.assertRaises(MatchingError):
|
||||
verify_optimum(ctx)
|
||||
|
||||
def test_negative_blossom_dual(self):
|
||||
#
|
||||
|
@ -503,11 +534,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
# \----8-----/
|
||||
#
|
||||
edges = [(0,1,7), (0,2,8), (1,2,9), (2,3,6)]
|
||||
blossom = mwmatching._NonTrivialBlossom(
|
||||
subblossoms=[
|
||||
mwmatching._Blossom(0),
|
||||
mwmatching._Blossom(1),
|
||||
mwmatching._Blossom(2)],
|
||||
blossom = NonTrivialBlossom(
|
||||
subblossoms=[Blossom(0), Blossom(1), Blossom(2)],
|
||||
edges=[0,2,1])
|
||||
for sub in blossom.subblossoms:
|
||||
sub.parent = blossom
|
||||
|
@ -517,8 +545,8 @@ class TestVerificationFail(unittest.TestCase):
|
|||
vertex_mate=[1, 0, 3, 2],
|
||||
vertex_dual_2x=[4, 6, 8, 4],
|
||||
nontrivial_blossom=[blossom])
|
||||
-        with self.assertRaises(mwmatching.MatchingError):
-            mwmatching._verify_optimum(ctx)
+        with self.assertRaises(MatchingError):
+            verify_optimum(ctx)

     def test_blossom_not_full(self):
         #

@@ -531,11 +559,8 @@ class TestVerificationFail(unittest.TestCase):
         # \----2-----/
         #
         edges = [(0,1,7), (0,2,2), (1,2,5), (0,3,8), (1,4,8)]
-        blossom = mwmatching._NonTrivialBlossom(
-            subblossoms=[
-                mwmatching._Blossom(0),
-                mwmatching._Blossom(1),
-                mwmatching._Blossom(2)],
+        blossom = NonTrivialBlossom(
+            subblossoms=[Blossom(0), Blossom(1), Blossom(2)],
             edges=[0,2,1])
         for sub in blossom.subblossoms:
             sub.parent = blossom

@@ -545,8 +570,62 @@ class TestVerificationFail(unittest.TestCase):
             vertex_mate=[3, 4, -1, 0, 1],
             vertex_dual_2x=[4, 10, 0, 12, 6],
             nontrivial_blossom=[blossom])
-        with self.assertRaises(mwmatching.MatchingError):
-            mwmatching._verify_optimum(ctx)
+        with self.assertRaises(MatchingError):
+            verify_optimum(ctx)
+
+
+class TestValueRange(unittest.TestCase):
+    """Test graphs that force big values for dual variables or edge slack."""
+
+    def test_big_blossom_dual(self):
+        """Force modified blossom dual close to 2*maxweight."""
+        #
+        #           [3]
+        #          /   \
+        #      W-4/     \W
+        #      7 /       \
+        # [1]-----[2]-----[4]
+        #          |  W-4
+        #         5|
+        #          | 1
+        #        [5]-----[6]
+        #
+        w = 100000
+        pairs = mwm([(1,2,7), (2,3,w-4), (2,4,w-4), (2,5,5), (3,4,w), (5,6,1)])
+        self.assertEqual(pairs, [(1,2), (3,4), (5,6)])
+
+    def test_negative_blossom_dual(self):
+        """Force modified blossom dual close to -maxweight."""
+        #
+        #           [3]
+        #          /   \
+        #        5/     \7
+        #      1 /       \
+        # [1]-----[2]-----[4]
+        #          |  5
+        #         1|
+        #          | W
+        #        [5]-----[6]
+        #
+        w = 100000
+        pairs = mwm([(1,2,1), (2,3,5), (2,4,5), (2,5,1), (3,4,7), (5,6,w)])
+        self.assertEqual(pairs, [(1,2), (3,4), (5,6)])
+
+    def test_big_edge_slack(self):
+        """Force modified edge slack close to 3*maxweight."""
+        #
+        #    6     W    W-2     5
+        # [1]-----[2]-----[3]-----[4]-----[5]
+        #  |               |
+        #  |1              |1
+        #  |               |
+        # [6]-----[7]-----[8]-----[9]-----[10]
+        #    6     W    W-2     5
+        #
+        w = 100000
+        pairs = mwm([(1,2,6), (1,6,1), (2,3,w), (3,4,w-2), (3,8,1), (4,5,5),
+                     (6,7,6), (7,8,w), (8,9,w-2), (9,10,5)])
+        self.assertEqual(pairs, [(1,6), (2,3), (4,5), (7,8), (9,10)])
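The expected matchings above can be cross-checked independently: the graphs are tiny, so an exhaustive search over vertex-disjoint edge subsets finds the maximum-weight matching directly. A standalone sketch (standard library only; `brute_force_matching` is an invented helper name), shown for the graph of test_negative_blossom_dual:

```python
from itertools import combinations

# Edge list from test_negative_blossom_dual (w = 100000).
w = 100000
edges = [(1, 2, 1), (2, 3, 5), (2, 4, 5), (2, 5, 1), (3, 4, 7), (5, 6, w)]

def brute_force_matching(edges):
    """Enumerate all edge subsets; keep the heaviest vertex-disjoint one."""
    best_pairs, best_weight = [], 0
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            verts = [v for (x, y, _) in sub for v in (x, y)]
            if len(verts) == len(set(verts)):      # vertex-disjoint
                total = sum(wt for (_, _, wt) in sub)
                if total > best_weight:
                    best_weight = total
                    best_pairs = sorted((x, y) for (x, y, _) in sub)
    return best_pairs, best_weight

pairs, weight = brute_force_matching(edges)
# pairs == [(1, 2), (3, 4), (5, 6)], matching the test's expectation
```

Exponential in the number of edges, of course, so only usable for hand-sized graphs like these.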
+
+
 if __name__ == "__main__":
@@ -0,0 +1,525 @@
"""Unit tests for data structures."""

import random
import unittest

from mwmatching.datastruct import ConcatenableQueue, PriorityQueue


class TestConcatenableQueue(unittest.TestCase):
    """Test ConcatenableQueue."""

    def _check_tree(self, queue):
        """Check tree balancing rules and priority info."""

        self.assertIsNone(queue.tree.parent)
        self.assertIs(queue.tree.owner, queue)

        nodes = [queue.tree]
        while nodes:

            node = nodes.pop()

            if node is not queue.tree:
                self.assertIsNone(node.owner)

            if node.height == 0:
                self.assertEqual(len(node.childs), 0)
                self.assertIs(node.min_node, node)
            else:
                self.assertIn(len(node.childs), (2, 3))
                best_node = set()
                best_prio = None
                for child in node.childs:
                    self.assertIs(child.parent, node)
                    self.assertEqual(child.height, node.height - 1)
                    nodes.append(child)
                    if ((best_prio is None)
                            or (child.min_node.prio < best_prio)):
                        best_node = {child.min_node}
                        best_prio = child.min_node.prio
                    elif child.min_node.prio == best_prio:
                        best_node.add(child.min_node)

                self.assertEqual(node.min_node.prio, best_prio)
                self.assertIn(node.min_node, best_node)

    def test_single(self):
        """Single element."""
        q = ConcatenableQueue("Q")

        with self.assertRaises(Exception):
            q.min_prio()

        with self.assertRaises(Exception):
            q.min_elem()

        n = q.insert("a", 4)
        self.assertIsInstance(n, ConcatenableQueue.Node)

        self._check_tree(q)

        self.assertEqual(n.find(), "Q")
        self.assertEqual(q.min_prio(), 4)
        self.assertEqual(q.min_elem(), "a")

        with self.assertRaises(Exception):
            q.insert("x", 1)

        n.set_prio(8)
        self._check_tree(q)

        self.assertEqual(n.find(), "Q")
        self.assertEqual(q.min_prio(), 8)
        self.assertEqual(q.min_elem(), "a")

        q.clear()

    def test_simple(self):
        """Simple test, 5 elements."""
        q1 = ConcatenableQueue("A")
        n1 = q1.insert("a", 5)

        q2 = ConcatenableQueue("B")
        n2 = q2.insert("b", 6)

        q3 = ConcatenableQueue("C")
        n3 = q3.insert("c", 7)

        q4 = ConcatenableQueue("D")
        n4 = q4.insert("d", 4)

        q5 = ConcatenableQueue("E")
        n5 = q5.insert("e", 3)

        q345 = ConcatenableQueue("P")
        q345.merge([q3, q4, q5])
        self._check_tree(q345)

        self.assertEqual(n1.find(), "A")
        self.assertEqual(n3.find(), "P")
        self.assertEqual(n4.find(), "P")
        self.assertEqual(n5.find(), "P")
        self.assertEqual(q345.min_prio(), 3)
        self.assertEqual(q345.min_elem(), "e")

        with self.assertRaises(Exception):
            q3.min_prio()

        self._check_tree(q345)
        n5.set_prio(6)
        self._check_tree(q345)

        self.assertEqual(q345.min_prio(), 4)
        self.assertEqual(q345.min_elem(), "d")

        q12 = ConcatenableQueue("Q")
        q12.merge([q1, q2])
        self._check_tree(q12)

        self.assertEqual(n1.find(), "Q")
        self.assertEqual(n2.find(), "Q")
        self.assertEqual(q12.min_prio(), 5)
        self.assertEqual(q12.min_elem(), "a")

        q12345 = ConcatenableQueue("R")
        q12345.merge([q12, q345])
        self._check_tree(q12345)

        self.assertEqual(n1.find(), "R")
        self.assertEqual(n2.find(), "R")
        self.assertEqual(n3.find(), "R")
        self.assertEqual(n4.find(), "R")
        self.assertEqual(n5.find(), "R")
        self.assertEqual(q12345.min_prio(), 4)
        self.assertEqual(q12345.min_elem(), "d")

        n4.set_prio(8)
        self._check_tree(q12345)

        self.assertEqual(q12345.min_prio(), 5)
        self.assertEqual(q12345.min_elem(), "a")

        n3.set_prio(2)
        self._check_tree(q12345)

        self.assertEqual(q12345.min_prio(), 2)
        self.assertEqual(q12345.min_elem(), "c")

        q12345.split()
        self._check_tree(q12)
        self._check_tree(q345)

        self.assertEqual(n1.find(), "Q")
        self.assertEqual(n2.find(), "Q")
        self.assertEqual(n3.find(), "P")
        self.assertEqual(n4.find(), "P")
        self.assertEqual(n5.find(), "P")
        self.assertEqual(q12.min_prio(), 5)
        self.assertEqual(q12.min_elem(), "a")
        self.assertEqual(q345.min_prio(), 2)
        self.assertEqual(q345.min_elem(), "c")

        q12.split()
        self._check_tree(q1)
        self._check_tree(q2)

        q345.split()
        self._check_tree(q3)
        self._check_tree(q4)
        self._check_tree(q5)

        self.assertEqual(n1.find(), "A")
        self.assertEqual(n2.find(), "B")
        self.assertEqual(n3.find(), "C")
        self.assertEqual(n4.find(), "D")
        self.assertEqual(n5.find(), "E")
        self.assertEqual(q3.min_prio(), 2)
        self.assertEqual(q3.min_elem(), "c")

        q1.clear()
        q2.clear()
        q3.clear()
        q4.clear()
        q5.clear()
        q12.clear()
        q345.clear()
        q12345.clear()

    def test_medium(self):
        """Medium test, 14 elements."""

        prios = [3, 8, 6, 2, 9, 4, 6, 8, 1, 5, 9, 4, 7, 8]

        queues = []
        nodes = []
        for i in range(14):
            q = ConcatenableQueue(chr(ord("A") + i))
            n = q.insert(chr(ord("a") + i), prios[i])
            queues.append(q)
            nodes.append(n)

        q = ConcatenableQueue("AB")
        q.merge(queues[0:2])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[0:2]))

        q = ConcatenableQueue("CDE")
        q.merge(queues[2:5])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[2:5]))

        q = ConcatenableQueue("FGHI")
        q.merge(queues[5:9])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[5:9]))

        q = ConcatenableQueue("JKLMN")
        q.merge(queues[9:14])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), min(prios[9:14]))

        for i in range(0, 2):
            self.assertEqual(nodes[i].find(), "AB")
        for i in range(2, 5):
            self.assertEqual(nodes[i].find(), "CDE")
        for i in range(5, 9):
            self.assertEqual(nodes[i].find(), "FGHI")
        for i in range(9, 14):
            self.assertEqual(nodes[i].find(), "JKLMN")

        q = ConcatenableQueue("ALL")
        q.merge(queues[14:18])
        queues.append(q)
        self._check_tree(q)
        self.assertEqual(q.min_prio(), 1)
        self.assertEqual(q.min_elem(), "i")

        for i in range(14):
            self.assertEqual(nodes[i].find(), "ALL")

        prios[8] = 5
        nodes[8].set_prio(prios[8])
        self.assertEqual(q.min_prio(), 2)
        self.assertEqual(q.min_elem(), "d")

        q.split()

        for i in range(0, 2):
            self.assertEqual(nodes[i].find(), "AB")
        for i in range(2, 5):
            self.assertEqual(nodes[i].find(), "CDE")
        for i in range(5, 9):
            self.assertEqual(nodes[i].find(), "FGHI")
        for i in range(9, 14):
            self.assertEqual(nodes[i].find(), "JKLMN")

        self.assertEqual(queues[14].min_prio(), min(prios[0:2]))
        self.assertEqual(queues[15].min_prio(), min(prios[2:5]))
        self.assertEqual(queues[16].min_prio(), min(prios[5:9]))
        self.assertEqual(queues[17].min_prio(), min(prios[9:14]))

        for q in queues[14:18]:
            self._check_tree(q)
            q.split()

        for i in range(14):
            self._check_tree(queues[i])
            self.assertEqual(nodes[i].find(), chr(ord("A") + i))
            self.assertEqual(queues[i].min_prio(), prios[i])
            self.assertEqual(queues[i].min_elem(), chr(ord("a") + i))

        for q in queues:
            q.clear()

    def test_random(self):
        """Pseudo-random test."""

        rng = random.Random(23456)

        nodes = []
        prios = []
        queues = {}
        queue_nodes = {}
        queue_subs = {}
        live_queues = set()
        live_merged_queues = set()

        for i in range(4000):
            name = f"q{i}"
            q = ConcatenableQueue(name)
            p = rng.random()
            n = q.insert(f"n{i}", p)
            nodes.append(n)
            prios.append(p)
            queues[name] = q
            queue_nodes[name] = {i}
            live_queues.add(name)

        for i in range(2000):

            for k in range(10):
                t = rng.randint(0, len(nodes) - 1)
                name = nodes[t].find()
                self.assertIn(name, live_queues)
                self.assertIn(t, queue_nodes[name])
                p = rng.random()
                prios[t] = p
                nodes[t].set_prio(p)
                pp = min(prios[tt] for tt in queue_nodes[name])
                tt = prios.index(pp)
                self.assertEqual(queues[name].min_prio(), pp)
                self.assertEqual(queues[name].min_elem(), f"n{tt}")

            k = rng.randint(2, max(2, len(live_queues) // 2 - 400))
            subs = rng.sample(sorted(live_queues), k)

            name = f"Q{i}"
            q = ConcatenableQueue(name)
            q.merge([queues[nn] for nn in subs])
            self._check_tree(q)
            queues[name] = q
            queue_nodes[name] = set().union(*(queue_nodes[nn] for nn in subs))
            queue_subs[name] = set(subs)
            live_queues.difference_update(subs)
            live_merged_queues.difference_update(subs)
            live_queues.add(name)
            live_merged_queues.add(name)

            pp = min(prios[tt] for tt in queue_nodes[name])
            tt = prios.index(pp)
            self.assertEqual(q.min_prio(), pp)
            self.assertEqual(q.min_elem(), f"n{tt}")

            if len(live_merged_queues) >= 100:
                name = rng.choice(sorted(live_merged_queues))
                queues[name].split()

                for nn in queue_subs[name]:
                    self._check_tree(queues[nn])
                    pp = min(prios[tt] for tt in queue_nodes[nn])
                    tt = prios.index(pp)
                    self.assertEqual(queues[nn].min_prio(), pp)
                    self.assertEqual(queues[nn].min_elem(), f"n{tt}")
                    live_queues.add(nn)
                    if nn in queue_subs:
                        live_merged_queues.add(nn)

                live_merged_queues.remove(name)
                live_queues.remove(name)

                del queues[name]
                del queue_nodes[name]
                del queue_subs[name]

        for q in queues.values():
            q.clear()
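As a reading aid, the queue operations exercised above (insert, merge, split, and the minimum-priority queries) can be summarized by a small dictionary-based model. This is a behavioral sketch only, under simplifying assumptions: no balanced 2-3 tree, no `Node.find()` owner tracking, and no enforcement that merged sub-queues become temporarily inaccessible. `ModelQueue` is an invented name, not part of mwmatching.datastruct:

```python
class ModelQueue:
    """Dictionary-based model of the ConcatenableQueue behavior
    (a sketch for illustration, not the implementation under test)."""

    def __init__(self, name):
        self.name = name
        self.items = {}    # element -> priority (for an unmerged queue)
        self.subs = []     # sub-queues absorbed by merge()

    def insert(self, elem, prio):
        self.items[elem] = prio

    def merge(self, subqueues):
        self.subs = list(subqueues)

    def split(self):
        # Undo the merge; the sub-queues become usable on their own again.
        subs, self.subs = self.subs, []
        return subs

    def _all_items(self):
        # Union of the items of this queue and, recursively, its sub-queues.
        if self.subs:
            out = {}
            for q in self.subs:
                out.update(q._all_items())
            return out
        return dict(self.items)

    def min_prio(self):
        items = self._all_items()
        if not items:
            raise Exception("empty queue")
        return min(items.values())

    def min_elem(self):
        items = self._all_items()
        return min(items, key=items.get)
```

The real class maintains the same observable answers while keeping each query logarithmic via the tree structure that `_check_tree` validates.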


class TestPriorityQueue(unittest.TestCase):
    """Test PriorityQueue."""

    def test_empty(self):
        """Empty queue."""
        q = PriorityQueue()
        self.assertTrue(q.empty())
        with self.assertRaises(IndexError):
            q.find_min()

    def test_single(self):
        """Single element."""
        q = PriorityQueue()

        n1 = q.insert(5, "a")
        self.assertEqual(n1.prio, 5)
        self.assertEqual(n1.data, "a")
        self.assertFalse(q.empty())
        self.assertIs(q.find_min(), n1)

        q.decrease_prio(n1, 3)
        self.assertEqual(n1.prio, 3)
        self.assertIs(q.find_min(), n1)

        q.delete(n1)
        self.assertTrue(q.empty())

    def test_simple(self):
        """A few elements."""
        prios = [9, 4, 7, 5, 8, 6, 4, 5, 2, 6]
        labels = "abcdefghij"

        q = PriorityQueue()

        elems = [q.insert(prio, data) for (prio, data) in zip(prios, labels)]
        for (n, prio, data) in zip(elems, prios, labels):
            self.assertEqual(n.prio, prio)
            self.assertEqual(n.data, data)

        self.assertIs(q.find_min(), elems[8])

        q.decrease_prio(elems[2], 1)
        self.assertIs(q.find_min(), elems[2])

        q.decrease_prio(elems[4], 3)
        self.assertIs(q.find_min(), elems[2])

        q.delete(elems[2])
        self.assertIs(q.find_min(), elems[8])

        q.delete(elems[8])
        self.assertIs(q.find_min(), elems[4])

        q.delete(elems[4])
        q.delete(elems[1])
        self.assertIs(q.find_min(), elems[6])

        q.delete(elems[3])
        q.delete(elems[9])
        self.assertIs(q.find_min(), elems[6])

        q.delete(elems[6])
        self.assertIs(q.find_min(), elems[7])

        q.delete(elems[7])
        self.assertIs(q.find_min(), elems[5])

        self.assertFalse(q.empty())
        q.clear()
        self.assertTrue(q.empty())

    def test_increase_prio(self):
        """Increase priority of existing element."""

        q = PriorityQueue()

        n1 = q.insert(5, "a")
        q.increase_prio(n1, 8)
        self.assertEqual(n1.prio, 8)
        self.assertIs(q.find_min(), n1)

        q = PriorityQueue()
        n1 = q.insert(9, "a")
        n2 = q.insert(4, "b")
        n3 = q.insert(7, "c")
        n4 = q.insert(5, "d")
        self.assertIs(q.find_min(), n2)

        q.increase_prio(n2, 8)
        self.assertEqual(n2.prio, 8)
        self.assertIs(q.find_min(), n4)

        q.increase_prio(n3, 10)
        self.assertEqual(n3.prio, 10)
        self.assertIs(q.find_min(), n4)

        q.delete(n4)
        self.assertIs(q.find_min(), n2)

        q.delete(n2)
        self.assertIs(q.find_min(), n1)

        q.delete(n1)
        self.assertIs(q.find_min(), n3)
        self.assertEqual(n3.prio, 10)

        q.delete(n3)
        self.assertTrue(q.empty())

    def test_random(self):
        """Pseudo-random test."""
        rng = random.Random(34567)

        num_elem = 1000

        seq = 0
        elems = []
        q = PriorityQueue()

        def check():
            min_prio = min(prio for (n, prio, data) in elems)
            m = q.find_min()
            self.assertIn((m, m.prio, m.data), elems)
            self.assertEqual(m.prio, min_prio)

        for i in range(num_elem):
            seq += 1
            prio = rng.randint(0, 1000000)
            elems.append((q.insert(prio, seq), prio, seq))
            check()

        for i in range(10000):
            p = rng.randint(0, num_elem - 1)
            prio = rng.randint(0, 1000000)
            if prio <= elems[p][1]:
                q.decrease_prio(elems[p][0], prio)
            else:
                q.increase_prio(elems[p][0], prio)
            elems[p] = (elems[p][0], prio, elems[p][2])
            check()

            p = rng.randint(0, num_elem - 1)
            q.delete(elems[p][0])
            elems.pop(p)
            check()

            seq += 1
            prio = rng.randint(0, 1000000)
            elems.append((q.insert(prio, seq), prio, seq))
            check()

        for i in range(num_elem):
            p = rng.randint(0, num_elem - 1 - i)
            q.delete(elems[p][0])
            elems.pop(p)
            if elems:
                check()

        self.assertTrue(q.empty())
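The operations covered above (insert, find_min, decrease_prio/increase_prio, delete, clear) can likewise be described by a plain list-based model. Again a behavioral sketch with invented names (`ModelNode`, `ModelPriorityQueue`), making no claim about how the real implementation stores entries or breaks priority ties:

```python
class ModelNode:
    """Queue entry; `prio` and `data` mirror the attributes the tests use."""
    def __init__(self, prio, data):
        self.prio = prio
        self.data = data

class ModelPriorityQueue:
    def __init__(self):
        self._nodes = []

    def empty(self):
        return not self._nodes

    def clear(self):
        self._nodes = []

    def insert(self, prio, data):
        node = ModelNode(prio, data)
        self._nodes.append(node)
        return node

    def find_min(self):
        # Like the real queue, raise IndexError when empty.
        if not self._nodes:
            raise IndexError("empty queue")
        return min(self._nodes, key=lambda n: n.prio)

    def decrease_prio(self, node, prio):
        assert prio <= node.prio
        node.prio = prio

    def increase_prio(self, node, prio):
        assert prio >= node.prio
        node.prio = prio

    def delete(self, node):
        self._nodes.remove(node)
```

The model's find_min is linear; the point of the real structure is to keep these operations cheap while returning the same minimum.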


if __name__ == "__main__":
    unittest.main()
@@ -1,22 +0,0 @@
#!/bin/sh

set -e

echo
echo "Running mypy"
mypy --disallow-incomplete-defs python tests

echo
echo "Running pylint"
pylint --ignore=test_mwmatching.py python tests

echo
echo "Running test_mwmatching.py"
python3 python/test_mwmatching.py

echo
echo "Checking test coverage"
coverage erase
coverage run --branch python/test_mwmatching.py
coverage report -m

run_tests.sh
@@ -1,57 +0,0 @@
#!/bin/sh

set -e

echo
echo "Running Python unit tests"
echo

python3 --version
echo

python3 python/test_mwmatching.py

echo
echo "Checking test coverage"
echo

coverage erase
coverage run --branch python/test_mwmatching.py
coverage report -m

echo
echo "Running Python code on test graphs"
echo

python3 python/run_matching.py --verify \
    tests/graphs/chain_n1000.edge \
    tests/graphs/chain_n5000.edge \
    tests/graphs/sparse_delta_n1004.edge \
    tests/graphs/triangles_n1002.edge \
    tests/graphs/triangles_n5001.edge \
    tests/graphs/random_n1000_m10000.edge \
    tests/graphs/random_n2000_m10000.edge

echo
echo "Running C++ unit tests"
echo

g++ --version
echo

make -C cpp run_matching test_mwmatching
cpp/test_mwmatching

echo
echo "Running C++ code on test graphs"
echo

python3 tests/run_test.py --solver cpp/run_matching --verify \
    tests/graphs/chain_n1000.edge \
    tests/graphs/chain_n5000.edge \
    tests/graphs/sparse_delta_n1004.edge \
    tests/graphs/triangles_n1002.edge \
    tests/graphs/triangles_n5001.edge \
    tests/graphs/random_n1000_m10000.edge \
    tests/graphs/random_n2000_m10000.edge
@@ -0,0 +1,38 @@
#!/bin/sh

set -e

echo
echo ">> Running unit tests"
echo

g++ --version
echo

make -C cpp clean
make -C cpp run_matching test_mwmatching test_concatenable_queue test_priority_queue
cpp/test_mwmatching
cpp/test_concatenable_queue
cpp/test_priority_queue

echo
echo ">> Running test graphs"
echo

python3 tests/run_test.py --solver cpp/run_matching --verify \
    tests/graphs/chain_n1000.edge \
    tests/graphs/chain_n5000.edge \
    tests/graphs/chain_n10000.edge \
    tests/graphs/sparse_delta_n1004.edge \
    tests/graphs/sparse_delta_n2004.edge \
    tests/graphs/sparse_delta_n5004.edge \
    tests/graphs/triangles_n1002.edge \
    tests/graphs/triangles_n5001.edge \
    tests/graphs/triangles_n10002.edge \
    tests/graphs/random_n1000_m10000.edge \
    tests/graphs/random_n2000_m10000.edge \
    tests/graphs/random_n4000_m10000.edge

echo
echo ">> Done"
@@ -0,0 +1,47 @@
#!/bin/bash

set -e

python3 --version

echo
echo ">> Running pycodestyle"
pycodestyle python/mwmatching python/run_matching.py tests

echo
echo ">> Running mypy"
mypy --disallow-incomplete-defs python tests

echo
echo ">> Running pylint"
pylint --ignore=test_algorithm.py python tests/*.py tests/generate/*.py || [ $(($? & 3)) -eq 0 ]

echo
echo ">> Running unit tests"
python3 -m unittest discover -t python -s python/tests

echo
echo ">> Checking test coverage"
coverage erase
coverage run --branch -m unittest discover -t python -s python/tests
coverage report -m

echo
echo ">> Running test graphs"
python3 python/run_matching.py --verify \
    tests/graphs/chain_n1000.edge \
    tests/graphs/chain_n5000.edge \
    tests/graphs/chain_n10000.edge \
    tests/graphs/sparse_delta_n1004.edge \
    tests/graphs/sparse_delta_n2004.edge \
    tests/graphs/sparse_delta_n5004.edge \
    tests/graphs/triangles_n1002.edge \
    tests/graphs/triangles_n5001.edge \
    tests/graphs/triangles_n10002.edge \
    tests/graphs/random_n1000_m10000.edge \
    tests/graphs/random_n2000_m10000.edge \
    tests/graphs/random_n4000_m10000.edge

echo
echo ">> Done"
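The `|| [ $(($? & 3)) -eq 0 ]` guard on the pylint invocation above works because pylint encodes its findings as a bit mask in the exit status (1 = fatal, 2 = error, 4 = warning, 8 = refactor, 16 = convention, 32 = usage error). Masking with 3 aborts the run only when the fatal or error bits are set, while tolerating style findings. A standalone sketch of the same check (`check_status` is an invented helper, simulating the exit status rather than running pylint):

```shell
# Apply the same fatal/error mask to simulated pylint exit statuses.
check_status() {
    status=$1
    if [ $((status & 3)) -eq 0 ]; then
        echo "status $status: acceptable (style findings only)"
    else
        echo "status $status: hard failure (fatal or error bit set)"
    fi
}

check_status 20   # warning (4) + convention (16) -> acceptable
check_status 6    # error (2) + warning (4)       -> hard failure
```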
@@ -7,7 +7,7 @@ described by Gabow.
 Reference: H. N. Gabow, "An efficient implementation of Edmonds'
     algorithm for maximum matching on graphs", JACM 23
     (1976), pp. 221-234.

 Based on Fortran program "hardcard.f" by R. Bruce Mattingly, 1991.
 Rewritten in Python by Joris van Rantwijk, 2023.
@@ -14,7 +14,7 @@ from typing import TextIO

 def write_dimacs_graph(
         f: TextIO,
-        edges: list[tuple[int, int, int|float]]
+        edges: list[tuple[int, int, float]]
 ) -> None:
     """Write a graph in DIMACS edge list format."""

@@ -38,7 +38,7 @@ def make_random_graph(
         max_weight: float,
         float_weights: bool,
         rng: random.Random
-) -> list[tuple[int, int, int|float]]:
+) -> list[tuple[int, int, float]]:
     """Generate a random graph with random edge weights."""

     edge_set: set[tuple[int, int]] = set()

@@ -59,9 +59,9 @@ def make_random_graph(
     rng.shuffle(edge_candidates)
     edge_set.update(edge_candidates[:m])

-    edges: list[tuple[int, int, int|float]] = []
+    edges: list[tuple[int, int, float]] = []
     for (x, y) in sorted(edge_set):
-        w: int|float
+        w: float
         if float_weights:
             w = rng.uniform(1.0e-8, max_weight)
         else:
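The repeated `int|float` → `float` simplification in these hunks is backed by PEP 484's numeric tower: a parameter annotated `float` also accepts `int` arguments, so the union form adds nothing for type checkers. A minimal illustration (`scale` is an invented example, not project code):

```python
def scale(x: float) -> float:
    # Per PEP 484's numeric tower, an annotation of `float` implicitly
    # allows `int`, which is why `int|float` can be written as `float`.
    return x / 2

assert scale(3) == 1.5      # int argument: fine at runtime and for mypy
assert scale(3.0) == 1.5
```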
@@ -19,26 +19,25 @@ count_delta_step = [0]

 def patch_matching_code() -> None:
     """Patch the matching code to count events."""
-    # pylint: disable=import-outside-toplevel,protected-access
+    # pylint: disable=import-outside-toplevel

-    import mwmatching
+    from mwmatching.algorithm import MatchingContext

-    orig_make_blossom = mwmatching._MatchingContext.make_blossom
-    orig_substage_calc_dual_delta = (
-        mwmatching._MatchingContext.substage_calc_dual_delta)
+    orig_make_blossom = MatchingContext.make_blossom
+    orig_calc_dual_delta_step = MatchingContext.calc_dual_delta_step

     def stub_make_blossom(*args: Any, **kwargs: Any) -> Any:
         count_make_blossom[0] += 1
         return orig_make_blossom(*args, **kwargs)

-    def stub_substage_calc_dual_delta(*args: Any, **kwargs: Any) -> Any:
+    def stub_calc_dual_delta_step(*args: Any, **kwargs: Any) -> Any:
         count_delta_step[0] += 1
-        ret = orig_substage_calc_dual_delta(*args, **kwargs)
+        ret = orig_calc_dual_delta_step(*args, **kwargs)
         return ret

-    mwmatching._MatchingContext.make_blossom = stub_make_blossom  # type: ignore
-    mwmatching._MatchingContext.substage_calc_dual_delta = (  # type: ignore
-        stub_substage_calc_dual_delta)
+    MatchingContext.make_blossom = stub_make_blossom  # type: ignore
+    MatchingContext.calc_dual_delta_step = (  # type: ignore
+        stub_calc_dual_delta_step)


 def run_max_weight_matching(
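The hunk above renames methods but keeps the instrumentation pattern intact: keep a reference to the original method, install a counting wrapper on the class, and delegate. The same pattern on a self-contained toy class (all names invented for illustration):

```python
from typing import Any

count_step = [0]

class Context:
    """Stand-in for the class being instrumented."""
    def step(self, x: int) -> int:
        return x + 1

# Save the original method before replacing it on the class.
orig_step = Context.step

def stub_step(*args: Any, **kwargs: Any) -> Any:
    # Count the call, then delegate to the original method.
    count_step[0] += 1
    return orig_step(*args, **kwargs)

Context.step = stub_step  # type: ignore

ctx = Context()
ctx.step(1)
ctx.step(2)
# count_step[0] == 2: every call went through the counting wrapper
```

Because the wrapper is installed on the class, every instance is instrumented, which is what lets the benchmark count events inside the matching algorithm without modifying it.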
@@ -0,0 +1,51 @@
#!/bin/bash

PYMWM=../python/run_matching.py
CPPMWM=../cpp/run_matching
LEMON=lemon/lemon_matching

# Generate big graphs that are not in the Git repo.
python3 generate/make_slow_graph.py chain 20000 > graphs/chain_n20000.edge
python3 generate/make_slow_graph.py chain 50000 > graphs/chain_n50000.edge
python3 generate/triangles.py 16667 1 > graphs/triangles_n50001.edge

# Run pre-generated graphs.
./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --runs 5 --verify \
    graphs/chain_n1000.edge \
    graphs/chain_n5000.edge \
    graphs/chain_n10000.edge \
    graphs/sparse_delta_n1004.edge \
    graphs/sparse_delta_n2004.edge \
    graphs/sparse_delta_n5004.edge \
    graphs/triangles_n1002.edge \
    graphs/triangles_n5001.edge \
    graphs/triangles_n10002.edge

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --runs 5 \
    graphs/chain_n20000.edge \
    graphs/chain_n50000.edge \
    graphs/triangles_n50001.edge

# Run random graphs.
./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 1000 --m 5000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 1000 --m 31622

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 1000 --m 499500

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 2000 --m 10000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 2000 --m 89442

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 2000 --m 1999000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 5000 --m 25000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 5000 --m 353553

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 10000 --m 50000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 10000 --m 1000000

./run_test.py --solver $PYMWM --solver $CPPMWM --solver $LEMON --timeout 600 --random --runs 25 --n 20000 --m 100000
@@ -29,7 +29,7 @@ class SolverError(Exception):

 class Graph(NamedTuple):
     """Represents a graph. Vertex indices start from 0."""
-    edges: list[tuple[int, int, int|float]]
+    edges: list[tuple[int, int, float]]

     def num_vertex(self) -> int:
         """Count number of vertices."""

@@ -55,11 +55,11 @@ class RunStatus(enum.IntEnum):
 class RunResult(NamedTuple):
     """Represent the result of running a solver on a graph."""
     status: RunStatus = RunStatus.OK
-    weight: int|float = 0
+    weight: float = 0
     run_time: Sequence[float] = ()


-def parse_int_or_float(s: str) -> int|float:
+def parse_int_or_float(s: str) -> float:
     """Convert a string to integer or float value."""
     try:
         return int(s)
@@ -130,11 +130,11 @@ def write_dimacs_graph(f: TextIO, graph: Graph) -> None:
         print(f"e {x+1} {y+1} {w:.12g}", file=f)


-def read_dimacs_matching(f: TextIO) -> tuple[int|float, Matching]:
+def read_dimacs_matching(f: TextIO) -> tuple[float, Matching]:
     """Read a matching solution in DIMACS format."""

     have_weight = False
-    weight: int|float = 0
+    weight: float = 0
     pairs: list[tuple[int, int]] = []

     for line in f:

@@ -208,9 +208,9 @@ def make_random_graph(
     rng.shuffle(edge_candidates)
     edge_set.update(edge_candidates[:m])

-    edges: list[tuple[int, int, int|float]] = []
+    edges: list[tuple[int, int, float]] = []
     for (x, y) in sorted(edge_set):
-        w: int|float
+        w: float
         if float_weights:
             w = rng.uniform(1.0e-8, max_weight)
         else:

@@ -220,14 +220,14 @@ def make_random_graph(
     return Graph(edges)


-def check_matching(graph: Graph, matching: Matching) -> int|float:
+def check_matching(graph: Graph, matching: Matching) -> float:
     """Verify that the matching is valid and calculate its weight."""

-    edge_map: dict[tuple[int, int], int|float] = {}
+    edge_map: dict[tuple[int, int], float] = {}
     for (x, y, w) in graph.edges:
         edge_map[(min(x, y), max(x, y))] = w

-    weight: int|float = 0
+    weight: float = 0
     nodes_used: set[int] = set()

     for pair in matching.pairs:

@@ -247,7 +247,7 @@ def check_matching(graph: Graph, matching: Matching) -> float:
     return weight


-def compare_weight(weight1: int|float, weight2: int|float) -> int:
+def compare_weight(weight1: float, weight2: float) -> int:
     """Compare weights of matchings.

     Returns:

@@ -406,7 +406,7 @@ class WmatchSolver(Solver):
         num_edge = len(graph.edges)

         all_integer = True
-        adjacent: list[list[tuple[int, int|float]]] = [
+        adjacent: list[list[tuple[int, float]]] = [
             [] for i in range(num_vertex)]

         for (x, y, w) in graph.edges:

@@ -451,13 +451,13 @@ def test_solver_on_graph(
         solver: Solver,
         graph: Graph,
         graph_desc: str,
-        gold_weight: Optional[int|float],
+        gold_weight: Optional[float],
         num_run: int
 ) -> RunResult:
     """Test the specified solver with the specified graph."""

     solver_run_time: list[float] = []
-    solver_weight: Optional[int|float] = None
+    solver_weight: Optional[float] = None

     for i in range(num_run):

@@ -507,7 +507,7 @@ def test_graph(
         solvers: list[Solver],
         graph: Graph,
         graph_desc: str,
-        gold_weight: Optional[int|float],
+        gold_weight: Optional[float],
         num_run: int
 ) -> list[RunResult]:
     """Test all solvers with the specified graph."""

@@ -525,7 +525,7 @@ def test_graph(

     if gold_weight is None:

-        best_weight: Optional[int|float] = None
+        best_weight: Optional[float] = None
         for result in results:
             if result.status == RunStatus.OK:
                 if (best_weight is None) or (result.weight > best_weight):

@@ -668,7 +668,7 @@ def test_input(
               file=sys.stderr)
         return 1

-    gold_weight: Optional[int|float] = None
+    gold_weight: Optional[float] = None
     if verify:
         reffile = os.path.splitext(filename)[0] + ".out"
         try:

@@ -740,7 +740,8 @@ def main() -> int:
     parser.add_argument("--timeout",
                         action="store",
                         type=float,
-                        help="abort when solver runs longer than TIMEOUT seconds")
+                        help="abort when solver runs longer than TIMEOUT"
+                             " seconds")
     parser.add_argument("--runs",
                         action="store",
                         type=int,
@@ -0,0 +1,51 @@
[tox]
skipsdist = true
env_list =
    static
    coverage
    py37
    py38
    py39
    py310
    py311
    py312
    py313
    pypy3

[testenv]
commands =
    python --version
    python -m unittest discover -t python -s python/tests
    python python/run_matching.py --verify \
        tests/graphs/chain_n1000.edge \
        tests/graphs/chain_n5000.edge \
        tests/graphs/chain_n10000.edge \
        tests/graphs/sparse_delta_n1004.edge \
        tests/graphs/sparse_delta_n2004.edge \
        tests/graphs/sparse_delta_n5004.edge \
        tests/graphs/triangles_n1002.edge \
        tests/graphs/triangles_n5001.edge \
        tests/graphs/triangles_n10002.edge \
        tests/graphs/random_n1000_m10000.edge \
        tests/graphs/random_n2000_m10000.edge \
        tests/graphs/random_n4000_m10000.edge

[testenv:static]
deps =
    mypy
    pycodestyle
    pylint
commands =
    pycodestyle python/mwmatching python/run_matching.py tests
    mypy --disallow-incomplete-defs python tests
    pylint --ignore=test_algorithm.py python tests/*.py tests/generate/*.py

[testenv:coverage]
deps =
    coverage
commands =
    coverage erase
    coverage run --branch -m unittest discover -t python -s python/tests
    coverage report -m