In computational geometry, a maximum disjoint set (MDS) is a largest set of nonoverlapping geometric shapes selected from a given set of candidate shapes.
Every set of nonoverlapping shapes is an independent set in the intersection graph of the shapes. Therefore, the MDS problem is a special case of the maximum independent set (MIS) problem. Both problems are NP-complete, but finding an MDS may be easier than finding an MIS, because algorithms for MDS can exploit the geometric structure of the shapes.
Finding an MDS is important in applications such as automatic label placement, VLSI circuit design, and cellular frequency division multiplexing.
The MDS problem can be generalized by assigning a different weight to each shape and searching for a disjoint set with a maximum total weight.
In the following text, MDS(C) denotes the maximum disjoint set in a set C.
Given a set C of shapes, an approximation to MDS(C) can be found by the following greedy algorithm: initialize S to the empty set; while C is not empty, select a shape x in C, add x to S, and remove from C every shape that intersects x (the set N(x)), as well as x itself; finally, return S.
For every shape x that we add to S, we lose the shapes in N(x), because they are intersected by x and thus cannot be added to S later on. However, some of these shapes intersect each other, so in any case not all of them can be in the optimal solution MDS(C). The largest subset of shapes that can all be in the optimal solution is MDS(N(x)). Therefore, selecting an x that minimizes the size of MDS(N(x)) minimizes the loss from adding x to S.
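To illustrate the rule, here is a minimal Python sketch of this greedy scheme (my own illustration, not a procedure from the literature): the shapes are axis-parallel rectangles, and MDS(N(x)) is computed by brute force, which is feasible only for very small neighborhoods.

```python
from itertools import combinations

def intersects(a, b):
    # a, b: axis-parallel rectangles (x1, y1, x2, y2); closed overlap test
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def mds_bruteforce(shapes):
    # exact MDS by exhaustive search; exponential time, tiny inputs only
    for size in range(len(shapes), 0, -1):
        for combo in combinations(shapes, size):
            if all(not intersects(p, q) for p, q in combinations(combo, 2)):
                return list(combo)
    return []

def greedy_mds(shapes):
    shapes, S = list(shapes), []
    while shapes:
        # pick the shape x whose loss MDS(N(x)) is smallest
        x = min(shapes, key=lambda s: len(mds_bruteforce(
            [y for y in shapes if y is not s and intersects(s, y)])))
        S.append(x)
        # discard x and everything it intersects (the set N(x))
        shapes = [y for y in shapes if not intersects(x, y)]
    return S
```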
In particular, if we can guarantee that there is an x for which the size of MDS(N(x)) is bounded by a constant (say, M), then this greedy algorithm yields an M-factor approximation, as we can guarantee that:

|S| ≥ MDS(C) / M
Such an upper bound M exists for several interesting cases:
When C is a set of intervals on a line, M=1, and thus the greedy algorithm finds the exact MDS. To see this, assume w.l.o.g. that the intervals are vertical, and let x be the interval with the highest bottom endpoint. All other intervals intersected by x must cross its bottom endpoint. Therefore, all intervals in N(x) intersect each other, and MDS(N(x)) has a size of at most 1 (see figure).
Therefore, in the 1-dimensional case, the MDS can be found exactly in time O(n log n), by repeatedly selecting the interval with the highest bottom endpoint and discarding the intervals that intersect it.
This algorithm is analogous to the earliest deadline first scheduling solution to the interval scheduling problem.
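A short Python sketch of this exact 1-dimensional algorithm, using the mirror-image rule of repeatedly taking the interval with the smallest right endpoint (intervals are treated as closed):

```python
def max_disjoint_intervals(intervals):
    """Exact MDS for closed intervals on a line: repeatedly take the
    remaining interval with the smallest right endpoint; every interval
    it intersects must cross that endpoint, so discarding them is safe."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:          # disjoint from all chosen intervals
            chosen.append((start, end))
            last_end = end
    return chosen
```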
In contrast to the 1-dimensional case, in two or more dimensions the MDS problem becomes NP-complete, so it admits either exact algorithms with super-polynomial running time or polynomial-time approximation algorithms.
When C is a set of unit disks, M=3, because the leftmost disk (the disk whose center has the smallest x coordinate) intersects at most 3 other disjoint disks (see figure). Therefore the greedy algorithm yields a 3approximation, i.e., it finds a disjoint set with a size of at least MDS(C)/3.
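For concreteness, a sketch of this greedy rule for unit disks (my own illustration; centers are (x, y) pairs, and two disks of radius r intersect exactly when their centers are at distance less than 2r):

```python
def unit_disk_greedy(centers, radius=1.0):
    """3-approximation sketch: repeatedly take the leftmost remaining
    disk and discard every disk intersecting it."""
    remaining = sorted(centers)                  # leftmost center first
    chosen = []
    while remaining:
        cx, cy = remaining.pop(0)
        chosen.append((cx, cy))
        # keep only disks whose centers are at distance >= 2 * radius
        remaining = [(x, y) for (x, y) in remaining
                     if (x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2]
    return chosen
```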
Similarly, when C is a set of axis-parallel unit squares, M=2.
When C is a set of arbitrary-size disks, M=5, because the disk with the smallest radius intersects at most 5 other disjoint disks (see figure).
Similarly, when C is a set of arbitrary-size axis-parallel squares, M=4.
Other constants can be calculated for other regular polygons.
The most common approach to finding an MDS is divide-and-conquer. A typical algorithm in this approach divides the set of shapes into two or more subsets, finds an MDS in each subset recursively, and combines the results.
The main challenge with this approach is to find a geometric way to divide the set into subsets. This may require discarding a small number of shapes that do not fit into any one of the subsets, as explained in the following subsections.
Let C be a set of n axis-parallel rectangles in the plane, all with the same height H but with varying lengths. There is an algorithm that finds a disjoint set with a size of at least MDS(C)/2 in time O(n log n).
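One standard way to realize such a 2-approximation (a sketch under assumptions of my own: rectangles are given as (x1, y, x2) triples of height H, in general position so that no bottom edge lies exactly on a multiple of H): assign each rectangle to the unique horizontal grid line y = iH that crosses its interior, solve each line's group exactly as a 1-dimensional interval problem, and return the larger of the combined even-line and odd-line solutions. Rectangles assigned to lines at least two apart can never overlap, and the larger of the two unions has at least half the optimum.

```python
import math
from collections import defaultdict

def _interval_mds(rects):
    # exact 1-D MDS on the x-projections: smallest right endpoint first
    chosen, last_end = [], float("-inf")
    for r in sorted(rects, key=lambda r: r[2]):   # r = (x1, y, x2)
        if r[0] > last_end:
            chosen.append(r)
            last_end = r[2]
    return chosen

def same_height_mds_2approx(rects, H):
    """rects: (x1, y, x2) triples, each occupying [x1, x2] x [y, y + H].
    Assumes no y is an exact multiple of H (general position)."""
    groups = defaultdict(list)
    for r in rects:
        # index of the unique grid line y = i*H crossing the interior
        groups[math.floor(r[1] / H) + 1].append(r)
    # rectangles on lines >= 2 apart cannot overlap, so the exact
    # per-line solutions of all even (resp. odd) lines are disjoint
    solved = {i: _interval_mds(g) for i, g in groups.items()}
    even = [r for i in solved if i % 2 == 0 for r in solved[i]]
    odd = [r for i in solved if i % 2 == 1 for r in solved[i]]
    return max(even, odd, key=len)
```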
Let C be a set of n axis-parallel rectangles in the plane, all with the same height but with varying lengths. There is an algorithm that finds a disjoint set with a size of at least MDS(C)/(1 + 1/k) in time O(n^(2k−1)), for every constant k > 1.
The algorithm is an improvement of the above-mentioned 2-approximation, combining dynamic programming with the shifting technique of Hochbaum and Maass.
This algorithm can be generalized to d dimensions. If the labels have the same size in all dimensions except one, it is possible to find a similar approximation by applying dynamic programming along one of the dimensions. This also reduces the time to n^O(1/e).
Let C be a set of n axis-parallel rectangles in the plane. The following algorithm finds a disjoint set with a size of at least MDS(C)/log(n) in time O(n log n). The rectangles are divided by a vertical line into those entirely to its left, those entirely to its right, and those intersecting it; the algorithm recurses on the left and right groups, obtaining M_left and M_right, computes an exact MDS M_int of the rectangles intersecting the line (a 1-dimensional problem on their y-projections), and returns the larger of M_left ∪ M_right and M_int.
It is provable by induction that, at the last step, either M_left ∪ M_right or M_int has a cardinality of at least MDS(C)/log(n).
Chalermsook and Chuzhoy have improved the approximation factor to O(log log n).
Chalermsook and Walczak have presented an O(log log n)-approximation algorithm for the more general setting, in which each rectangle has a weight and the goal is to find an independent set of maximum total weight.
For a long time, it was not known whether a constant-factor approximation exists for axis-parallel rectangles of different lengths and heights. It was conjectured that such an approximation could be found using guillotine cuts. In particular, if there exists a guillotine separation of axis-parallel rectangles in which Ω(n) rectangles are separated, then it can be used in a dynamic-programming approach to find a constant-factor approximation to the MDS.
To date, it is not known whether such a guillotine separation exists. However, there are constant-factor approximation algorithms using non-guillotine cuts:
Let C be a set of n squares or circles of identical size. Hochbaum and Maass presented a polynomial-time approximation scheme for finding an MDS using a simple shifted-grid strategy. It finds a solution within (1 − e) of the maximum in time n^O(1/e²) and linear space. The strategy generalizes to any collection of fat objects of roughly the same size (i.e., when the maximum-to-minimum size ratio is bounded by a constant).
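A minimal sketch of the shifted-grid idea for unit squares (helper names and details are my own; squares are given by their lower-left corners, and the per-block solver is brute force, so this is illustrative only). For each shift r, squares whose interiors cross grid lines at coordinates congruent to r modulo k are discarded, each block between consecutive discarded lines is solved exactly, and the best of the k shifts is kept; each square can be discarded for at most two of the k shifts (one vertical line class and one horizontal), so some shift preserves at least a (1 − 2/k) fraction of the optimum.

```python
import math
from itertools import combinations
from collections import defaultdict

def disjoint(a, b):
    # open unit squares with lower-left corners a, b
    return abs(a[0] - b[0]) >= 1 or abs(a[1] - b[1]) >= 1

def exact_mds(squares):
    # brute force; only ever called on one k-by-k block
    for size in range(len(squares), 0, -1):
        for combo in combinations(squares, size):
            if all(disjoint(p, q) for p, q in combinations(combo, 2)):
                return list(combo)
    return []

def crosses(c, r, k):
    # does the open interval (c, c + 1) contain a line at r + i*k?
    nxt = r + (math.floor((c - r) / k) + 1) * k   # first line > c
    return nxt < c + 1

def shifted_grid_mds(squares, k):
    best = []
    for r in range(k):
        groups = defaultdict(list)
        for (x, y) in squares:
            if crosses(x, r, k) or crosses(y, r, k):
                continue                          # discarded for this shift
            block = (math.floor((x - r) / k), math.floor((y - r) / k))
            groups[block].append((x, y))
        # blocks are separated by discarded lines, so solutions combine freely
        sol = [s for g in groups.values() for s in exact_mds(g)]
        if len(sol) > len(best):
            best = sol
    return best
```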
Let C be a set of n fat objects, such as squares or circles, of arbitrary sizes. There is a PTAS for finding an MDS based on multi-level grid alignment. It was discovered by two groups at approximately the same time and described in two different ways.
An algorithm of Erlebach, Jansen and Seidel finds a disjoint set with a size of at least (1 − 1/k)² · MDS(C) in time n^O(k²), for every constant k > 1. It works in the following way.
Scale the disks so that the smallest disk has diameter 1. Partition the disks into levels based on the logarithm of their size: the j-th level contains all disks with diameter between (k + 1)^j and (k + 1)^(j+1), for j ≥ 0 (the smallest disk is in level 0).
For each level j, impose a grid on the plane consisting of lines that are (k + 1)^(j+1) apart from each other. By construction, every disk can intersect at most one horizontal line and one vertical line from its level.
For every r, s between 0 and k, define D(r,s) as the subset of disks that are not intersected by any horizontal line whose index modulo k is r, nor by any vertical line whose index modulo k is s. By the pigeonhole principle, there is at least one pair (r,s) such that MDS(D(r,s)) ≥ (1 − 1/k)² · MDS(C), i.e., we can find the MDS within D(r,s) alone and miss only a small fraction of the disks in the optimal solution.
An algorithm of Chan finds a disjoint set with a size of at least (1 − 2/k)·MDS(C) in time n^O(k), for every constant k > 1.
The algorithm uses shifted quadtrees. Its key concept is alignment to the quadtree grid: an object of size r is called k-aligned (where k ≥ 1 is a constant) if it is inside a quadtree cell of size R ≤ kr.
By definition, a k-aligned object that intersects the boundary of a quadtree cell of size R must have a size of at least R/k (r ≥ R/k). The boundary of a cell of size R can be covered by 4k squares of size R/k; hence the number of disjoint fat objects intersecting the boundary of that cell is at most 4kc, where c is a constant measuring the fatness of the objects.
Therefore, if all objects are fat and k-aligned, it is possible to find the exact maximum disjoint set in time n^O(kc) using a divide-and-conquer algorithm: start with a quadtree cell that contains all objects, recursively divide it into smaller quadtree cells, find the maximum in each smaller cell, and combine the results to get the maximum in the larger cell. Since the number of disjoint fat objects intersecting the boundary of any quadtree cell is bounded by 4kc, we can simply "guess" which objects intersect the boundary in the optimal solution, and then apply divide-and-conquer to the objects inside.
If almost all objects are k-aligned, we can just discard the objects that are not k-aligned and find a maximum disjoint set of the remaining objects in time n^O(k). This yields a (1 − e)-approximation, where e is the fraction of objects that are not k-aligned.
If most objects are not k-aligned, we can try to make them k-aligned by shifting the grid in multiples of (1/k, 1/k). First, scale the objects so that they are all contained in the unit square. Then consider k shifts of the grid: (0,0), (1/k,1/k), (2/k,2/k), ..., ((k − 1)/k,(k − 1)/k); i.e., for each j in {0,...,k − 1}, consider a shift of the grid by (j/k, j/k). It is possible to prove that every object will be 2k-aligned for at least k − 2 values of j. Now, for every j, discard the objects that are not 2k-aligned in the (j/k, j/k) shift, and find a maximum disjoint set of the remaining objects; call that set A(j). Call the true maximum disjoint set A*. Then:
∑_{j=0,…,k−1} |A(j)| ≥ (k − 2)·|A*|
Therefore, the largest A(j) has a size of at least (1 − 2/k)·|A*|. The return value of the algorithm is the largest A(j); the approximation factor is (1 − 2/k), and the run time is n^O(k). By choosing k, the approximation factor can be made as close to 1 as we want, so this is a PTAS.
Both versions can be generalized to d dimensions (with different approximation ratios) and to the weighted case.
Several divide-and-conquer algorithms are based on a certain geometric separator theorem. A geometric separator is a line or shape that separates a given set of shapes into two smaller subsets such that the number of shapes lost during the division is relatively small. This allows both PTASs and sub-exponential exact algorithms, as explained below.
Let C be a set of n fat objects, such as squares or circles, of arbitrary sizes. Chan described an algorithm that finds a disjoint set with a size of at least (1 − O(1/√b))·MDS(C) in time n^O(b), for every constant b > 1.
The algorithm is based on the following geometric separator theorem, which can be proved similarly to the proof of the existence of a geometric separator for disjoint squares:
where a and c are constants. If we could calculate MDS(C) exactly, we could make the constant a as low as 2/3 by a proper selection of the separator rectangle. But since we can only approximate MDS(C) by a constant factor, the constant a must be larger. Fortunately, a remains a constant independent of C.
This separator theorem makes it possible to build the following PTAS:
Select a constant b. Check all possible combinations of up to b + 1 labels.
Let E(m) be the error of the above algorithm when the optimal MDS size is MDS(C) = m. When m ≤ b, the error is 0, because the maximum disjoint set is calculated exactly; when m > b, the error increases by at most c√m, the number of labels intersected by the separator. The worst case for the algorithm is when the split in each step is in the maximum possible ratio, a:(1 − a). Therefore the error function satisfies the recurrence E(m) ≤ E(a·m) + E((1 − a)·m) + c√m for m > b, with E(m) = 0 for m ≤ b. The solution to this recurrence is E(m) = O(m/√b). We can make the relative error as small as we want by a proper selection of b.
This PTAS is more space-efficient than the PTAS based on quadtrees and can handle a generalization in which the objects may slide, but it cannot handle the weighted case.
Let C be a set of n disks such that the ratio between the largest radius and the smallest radius is at most r. The following algorithm finds MDS(C) exactly in time 2^O(r·√n).
The algorithm is based on a width-bounded geometric separator on the set Q of the centers of all disks in C. This separator theorem makes it possible to build the following exact algorithm:
The run time of this algorithm satisfies a divide-and-conquer recurrence whose solution is the 2^O(r·√n) bound above.
A pseudo-disks set is a set of objects in which the boundaries of every pair of objects intersect at most twice. (Note that this definition relates to the collection as a whole, and does not say anything about the shapes of the individual objects in it.) A pseudo-disks set has bounded union complexity, i.e., the number of intersection points on the boundary of the union of all objects is linear in the number of objects. For example, a set of squares or circles of arbitrary sizes is a pseudo-disks set.
Let C be a pseudo-disks set with n objects. A local-search algorithm by Chan and Har-Peled finds a disjoint set of size at least (1 − O(1/√b))·MDS(C) in time O(n^(b+3)), for every integer constant b ≥ 0. The algorithm maintains a disjoint set S, starting from the empty set, and repeatedly performs an exchange: it removes at most b objects from S and inserts at least one more object than it removed, as long as S remains disjoint.
Every exchange in the search step increases the size of S by at least 1, and thus can happen at most n times.
The algorithm is very simple; the difficult part is to prove the approximation ratio.
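A sketch of such a local search, with closed intervals standing in for pseudo-disks (the interval setting and names are my own; the real algorithm runs on arbitrary pseudo-disks, and proving the (1 − O(1/√b)) ratio is the hard part):

```python
from itertools import combinations

def intersects(a, b):
    # closed intervals as a simple stand-in for pseudo-disks
    return not (a[1] < b[0] or b[1] < a[0])

def local_search_mds(objects, b):
    """(b+1)-for-b exchange: while some swap can remove at most b members
    of S and insert one more object than it removed, keeping S disjoint,
    perform it. Each swap grows S, so there are at most n swaps."""
    S = []
    improved = True
    while improved:
        improved = False
        rest = [o for o in objects if o not in S]
        for out_size in range(b + 1):
            for removed in combinations(S, out_size):
                kept = [s for s in S if s not in removed]
                for added in combinations(rest, out_size + 1):
                    cand = kept + list(added)
                    if all(not intersects(p, q)
                           for p, q in combinations(cand, 2)):
                        S, improved = cand, True
                        break
                if improved:
                    break
            if improved:
                break
    return S
```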
Let C be a pseudo-disks set with n objects and union complexity u. Using linear programming relaxation, it is possible to find a disjoint set of size at least (n/u)·MDS(C). This can be done either with a randomized algorithm that has a high probability of success and runs in time O(n³), or with a deterministic algorithm with a slower (but still polynomial) run time. This algorithm can be generalized to the weighted case.