Written by Joël Benjamin Huber.
Trees are a very special kind of graph, and many procedures on trees can be optimized. One of these optimizations is called “Smaller to Larger”, also known as “DSU (Disjoint Set Union) on Trees”. We will see later why it is called “Smaller to Larger”.
Introductory Problem
Given a rooted tree with colored vertices, find for each vertex $v$ the number of vertices in the subtree of $v$ with the same color as $v$.
Figure 1: An example graph for the introductory problem. The labels correspond to the solutions for the vertices.
Problems with short problem statements tend to be easy, right? The naive approach to this problem would be to start a DFS from each vertex, counting the number of vertices in the subtree with the same color. This runs in $O(n^2)$. Let’s take a closer look at what we’re doing. It’s often useful to look at the information we discard while calculating, or at what we’re calculating several times. Looking at what our algorithm is doing, we can see that we go through each subtree again and again. Suppose we know, for a vertex $v$ and for every color, the number of vertices with that color in the subtree rooted at $v$. Then there’s no need to go through the whole subtree again. So we get the idea that instead of doing a DFS from each vertex, we save for each vertex the information about its subtree, and to get this information for a vertex, we somehow merge the information of its children together. In this case, we could store for each subtree a map from colors to the number of vertices with that color. We can create the map for a vertex by copying the elements from the maps of its children into a new map and then adding the color of the vertex itself.
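Expressed as code, this (still slow) idea could look like the following sketch, assuming the usual #include <bits/stdc++.h> and using namespace std, plus the members adjlist, col and sol of the graph struct from the sample implementation further below:

map<int, int> dfs_naive(int curr, int par) {
    map<int, int> cnt;                      // color -> number of vertices in the subtree
    for (int next : adjlist[curr]) {
        if (next == par) continue;
        auto child = dfs_naive(next, curr); // information about the child's subtree
        for (auto& [c, k] : child)
            cnt[c] += k;                    // copy every element into the new map
    }
    cnt[col[curr]]++;                       // add the color of the current vertex
    sol[curr] = cnt[col[curr]];             // solution for the introductory problem
    return cnt;
}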
Note that the size of the maps is bounded from above by the number of elements in the subtree. The problem is that we can have $\Theta(n)$ elements in both of the maps, and since for each vertex we copy all the elements into a new map, our running time is $O(n^2 \log n)$, which is even worse than what we had before. The advantage is that we don’t worry about going through the subtrees anymore; the “slowness” of the algorithm now lies in the way we copy the elements. But we can speed this up. Let’s see how.
Instead of creating a new map for each vertex, we can keep one of the maps of the children and copy the other maps into it. Which map should we choose to keep? Intuitively, we should keep the largest one, since this minimizes the number of elements we need to copy. And this works and reduces our total running time. This is the trick named smaller to larger: when dealing with some kind of tree structure where we need to merge sets, if for each merge operation we keep the larger set and copy the elements of the smaller set into it, then our running time is reduced. Let $n$ be the number of vertices and $m$ be the number of elements. Then we perform $O(m \log m)$ insert operations in total. Most of the time $m = O(n)$, and if we use the normal set::insert() / map::operator[] functions, which cost $O(\log n)$ each, we get a running time of $O(n \log^2 n)$. Let’s first see why this makes sense intuitively, before going on to a formal proof.
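Expressed as code, the merging step could look like this minimal sketch (the name mergeInto is ours; note that std::swap on two maps runs in $O(1)$):

void mergeInto(map<int, int>& a, map<int, int>& b) {
    if (a.size() < b.size())
        swap(a, b);        // make sure a is the larger map; swapping maps is O(1)
    for (auto& [color, cnt] : b)
        a[color] += cnt;   // copy the smaller map into the larger one
    b.clear();
}

After the call, a holds the merged map and b is empty, so the caller continues working with a.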
Intuition
For the sake of simplicity, assume our tree is a binary tree. Let’s fix a vertex $v$. If we now merge the maps of the two children arbitrarily, our worst case is merging a very big map into a very small one. Now if we’re clever, we check which map is smaller and merge the smaller map into the larger one. Then our new worst case is when the tree is balanced; otherwise we save some operations. But when the tree is balanced, the height is about $\log_2 n$, and so we copy each element only about $\log_2 n$ times. Combining this, we should get something around $O(n \log n)$ copy operations. This is actually the right bound for this algorithm; however, this “intuition” is very far from a correct proof. So, let’s get to the proof.
Formal Proof
For the formal proof, we change the structure a bit. Suppose we have $n$ sets initially, with a total of $m$ elements in them. Let’s define an operation on a set which adds a new element to the set; let’s say S.add_element(e) adds element e to S. We also need to add the restriction that add_element increases the cardinality of the set by at most one. For example, the normal insert() operation is a valid choice for add_element. Or, in our case, we can make our elements pairs of integers {color, number of vertices with this color}, which is easiest to code using a map. We can then choose our add_element() operation to be the following:
def add_element(col, cnt):
    if {col, k} already in the set for some k:
        replace {col, k} by {col, k + cnt}
    else:
        insert {col, cnt} into the set
This can be done with C++ maps in a very easy way:
void add_element(map<int, int>& mp, pair<int, int> p) {
    mp[p.first] += p.second;
}
Note that this operation is associative and increases the cardinality of the set by at most one. Now we want to repeatedly do the following operation until only one set is left: choose some sets and replace them with one set S constructed as follows:
def merge(set S_0, set S_1, ...):
    S = the set among S_0, S_1, ... with the largest cardinality
    for each set S_i not chosen as S:
        for each element e in S_i:
            S.add_element(e)
    return S
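In C++, this merge could look like the following minimal sketch (the name merge_sets is ours; here add_element is simply set::insert):

set<int> merge_sets(vector<set<int>>& parts) {
    size_t largest = 0;
    for (size_t i = 1; i < parts.size(); i++)
        if (parts[i].size() > parts[largest].size())
            largest = i;                        // find the set with the largest cardinality
    set<int> S = move(parts[largest]);          // keep it instead of copying it
    for (size_t i = 0; i < parts.size(); i++) {
        if (i == largest) continue;
        for (int e : parts[i])
            S.insert(e);                        // S.add_element(e)
    }
    return S;
}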
Note that this is what we were doing before: in the beginning, each vertex is a single set, and then we start merging the sets according to the tree structure, which can be arbitrary. Let’s say that if add_element(e) increases the cardinality of the new set by one, then the new element is a successor of e. Now let e be an element in any of the initial sets. Let’s look at how many times we apply add_element with e or a (direct or indirect) successor of e as argument. Note that since we discard the “old” sets after we merge them, there is always at most one successor of e (or e itself) alive at any moment.
Note that if add_element(e) does not add an element to the target set, then there is no successor of e anymore, and there never will be. So we will never merge e into a new set again, which only reduces the total number of moves we make. So we can assume that add_element(e) always adds exactly one element, since this is the worst case for us.
Let’s now look at how many times we use add_element with e or a successor of e as argument. To be able to bound this number, we track one quantity: the cardinality of the set that the current successor of e (or e itself) lies in. Why should we choose this quantity? Because we somehow need to use the fact that we always merge into the largest set.
Now denote the set our element is currently in by $S_{\text{cur}}$, the largest of the sets we merge together by $S_{\max}$, and the set we get in the end by $S_{\text{new}}$. Note that we don’t use add_element when our element is in the largest set, because we keep the largest set as it is. So whenever we do use it, writing down some inequalities, we get $|S_{\text{new}}| \ge |S_{\text{cur}}| + |S_{\max}| \ge 2 \cdot |S_{\text{cur}}|$. This is very useful: every time we use add_element with e or a successor of e, the cardinality of the set the next successor is in is at least twice as big as the cardinality of the set the old element was in. We can use this to bound the number of calls to add_element, because the cardinality of a set cannot exceed $m$ (the total number of elements cannot increase, since at most one successor of each element can be “alive”). Thus, we can write down this bound: let $k$ be the number of calls to add_element with e or a successor of e as argument; then $2^k \le m$ must hold. Taking the logarithm on both sides, we get that $k \le \log_2 m$. Now each element in any of the initial sets contributes at most $\log_2 m$ calls to add_element, so the total number of calls is $O(m \log m)$. We should not forget to add $O(n)$ because we need to look at each set, but normally the time we need for merging dominates the total running time. Most of the time, $m = O(n)$ and one call to add_element costs $O(\log n)$, which gets us a runtime of $O(n \log^2 n)$.
Sample implementation
Here you can find an implementation of Smaller to Larger which (instead of merging all the children together in one merging process) merges the sets of the children one by one. This implementation counts the number of distinct colors in each subtree. What do you need to modify such that it solves our introductory problem? One possible answer follows below the code.
struct graph {
    vector<vector<int>> adjlist;
    vector<int> col;
    vector<int> sol;

    set<int> dfs(int curr, int par) {
        set<int> cst;                       // Create a new set
        cst.insert(col[curr]);              // Insert the color of this vertex
        for (auto next : adjlist[curr]) {
            if (next == par) continue;
            auto nst = dfs(next, curr);     // Get the set of this child
            if (nst.size() > cst.size())
                swap(nst, cst);             // If the child's set is larger, swap so that we copy into the larger set
            for (auto p : nst)
                cst.insert(p);              // Copy the values from the smaller to the larger set
        }
        sol[curr] = cst.size();             // Save the solution for this vertex
        return cst;                         // Return the current set
    }
};
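One possible modification (a sketch, not necessarily the only way): replace the set by a map from color to count, merge smaller into larger as before, and read off the count of this vertex’s color.

map<int, int> dfs(int curr, int par) {
    map<int, int> cst;
    cst[col[curr]]++;                   // Count the color of this vertex
    for (auto next : adjlist[curr]) {
        if (next == par) continue;
        auto nst = dfs(next, curr);
        if (nst.size() > cst.size())
            swap(nst, cst);             // Copy the smaller map into the larger one
        for (auto& [c, cnt] : nst)
            cst[c] += cnt;              // add_element: add the color counts together
    }
    sol[curr] = cst[col[curr]];         // Vertices in the subtree with the same color
    return cst;
}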
Sample Problems
BOI 2017 Railway
Disclaimer: by upwards, I mean in the direction of the root; by downwards, away from the root.
The ministry of infrastructure in Bergen wants to build a new railway in order to connect the $n$ stations. They only wanted to build the $n - 1$ connecting tracks of a tree, such that all stations are connected. However, they found out that they cannot build all of the tracks in time. So they decided to ask $m$ ministers which tracks they think should be built. Each minister wrote down a list of stations they think should be connected: the $i$-th minister wrote down $s_i$ vertices. For each pair of vertices in his list, he wants all the tracks on the direct path between these two vertices to be built. The ministry of infrastructure then decided to construct all the tracks that are requested by at least $k$ ministers. Our task is to figure out which of the tracks should be built. The limits on $n$, $m$ and $s_1 + \dots + s_m$ are large enough that a quadratic solution is too slow.
In this task, we need to exploit the tree structure several times. First, we may try to look at what the set of tracks a minister wants to build looks like. Before reading on, the reader should take a piece of paper and try to find some nice way to describe these tracks.
In such problems, it’s always useful to root the tree. After looking at some examples, it seems like the LCA (lowest common ancestor) plays an important role. Indeed, it seems like a minister thinks a track should be built if and only if it lies on the direct path between any selected station and the LCA of all selected stations. And this is not hard to prove with some basic facts about LCAs and the fact that each path from $u$ to $v$ in the tree can be split up into a path upwards from $u$ to $\operatorname{lca}(u, v)$ and a path downwards from $\operatorname{lca}(u, v)$ to $v$.
Now that we understand what the selected tracks of each minister look like, we need to figure out how to process the tracks efficiently. Suppose a minister selected a station $u$. We can get all the tracks this station adds by starting at this vertex and always moving upwards, “marking” the visited edges, until we reach the LCA of the minister’s stations and stop there. This already smells a bit like smaller to larger. Let’s say that a minister is active on a vertex $v$ if the edge from $v$ upwards is selected by this minister. We can see that this is the case if and only if [the same minister is active on a child of $v$, or the minister selected this station] and [$v$ is not the LCA of all the vertices the minister selected]. This is solvable with smaller to larger: we try to calculate for each vertex the set of active ministers. With smaller to larger, we can merge the sets of the children efficiently. We only need two extra steps: if a vertex is selected by a minister, we insert that minister, and if a vertex is the LCA of the stations of a minister, we take this minister out of the set (after merging). For each edge, we can easily calculate the number of ministers who want to build this edge: it’s just the number of active ministers on the vertex directly below the edge.
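A condensed sketch of this DFS, assuming the LCA of every minister’s list has already been computed with your favourite LCA technique (the names need, lcaAt, adj, k and ans are ours, not from the official problem):

// need[v]  = ids of ministers who selected station v
// lcaAt[v] = ids of ministers whose selected stations have their LCA at v
// adj[v]   = list of {neighbour, edge id}; k = required number of ministers
set<int> dfs(int v, int par, int parEdge) {
    set<int> active;                           // ministers active on v
    for (auto [next, id] : adj[v]) {
        if (next == par) continue;
        auto child = dfs(next, v, id);
        if (child.size() > active.size())
            swap(child, active);               // smaller to larger
        active.insert(child.begin(), child.end());
    }
    for (int mi : need[v]) active.insert(mi);  // the minister selected this station
    for (int mi : lcaAt[v]) active.erase(mi);  // all paths of this minister stop here
    if (parEdge != -1 && (int)active.size() >= k)
        ans.push_back(parEdge);                // enough ministers want the edge above v
    return active;
}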
CEOI 2019 Magictree
In this problem, we’re given a rooted tree. Some vertices of this tree have a fruit. Each fruit is ripe on exactly one day $d_v$ and has a juicy-value of $w_v$. We can harvest a ripe fruit by cutting off its vertex. But then all the fruits we didn’t harvest which are not connected to the root anymore die. Our goal is to maximize the total juiciness of the fruit we harvest.
This looks like some kind of dp. Indeed, we can define $dp[v][t]$ as the maximal juiciness we can harvest from the subtree rooted at $v$ in the first $t$ days. The recursion formula is not hard to come up with when we look at the 2 cases (cut and no cut):

$dp[v][t] = \max\Big( \sum_{c \text{ child of } v} dp[c][t],\; [d_v \le t] \cdot \big( w_v + \sum_{c \text{ child of } v} dp[c][d_v] \big) \Big)$

where the second case (cutting $v$ on day $d_v$) only exists if $v$ has a fruit: everything in the subtree must then be harvested within the first $d_v$ days.
Our dp has $O(n \cdot d_{\max})$ states, where $d_{\max}$ is the largest day, and this is also the final running time of this algorithm (what happens to the transitions?). But we need to do better. Clearly, we want to get rid of the factor $d_{\max}$. So what is missing? The observations.
Let’s plot the dp-array of a vertex. We can make 2 observations: the array consists of several constant segments, and it is non-decreasing. The second is obvious: if instead of the first $t$ days we look at the first $t + 1$ days, we could still do the same cuts as in the first $t$ days if there is no better option. For the first one, we can see that the interesting times are only the ones where a fruit is ripe. If on day $t$ there is no new ripe fruit, we still have the same possibilities as on day $t - 1$. Thus we can compress the times to get an $O(n^2)$ solution.
Figure 2: Plot of the dp of a vertex over time. The red line marks the values which the fruit at the vertex takes.
But instead of compressing, we just change the representation of our dp. Instead of saving the values, we save the differences between consecutive values. You can see this as some kind of inverse of the prefix sum: if we take prefix sums of our new array, we get the original dp back. Note that we lose the ability to look up a value of our dp in $O(1)$, but we don’t care, since we only need the values once, namely at the root, so we have plenty of time to calculate the values then. Since in this array most of the values are zeroes, we discard them and store the rest in a map.
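As a small illustration (our own example): if the dp-values of a vertex over the days $1, \dots, 5$ are $[0, 0, 3, 3, 5]$, the difference representation stores only the two increases, as the map $\{3 \mapsto 3,\ 5 \mapsto 2\}$; taking prefix sums over this map recovers the original array.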
What about transitions? Note that summing up the original dp-arrays ($\sum_{c} dp[c][t]$) is equivalent to summing up our new difference arrays. Thus we can just merge our maps and add the elements with the same key together. Hey, we’re merging maps! Does this ring a bell? It should! We can merge smaller maps into larger ones. This will reduce our running time.
This is not yet the whole transition formula. However, we can notice (looking at the graph again) that the rest of the formula just adds another constant segment to the sum we calculated, which can be added with an entry at position $d_v$ with value $w_v$. But if we only add this entry, we need to subtract $w_v$ in total from the next entries, since we don’t want to increase everything that comes later; we only want to add this new segment. So we subtract from the next entries until we have subtracted $w_v$ in total, deleting entries that become empty. Note that since we insert each element only once, we can erase it only once, and so we delete $O(n)$ elements in total (amortized $O(1)$ per insertion).
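Putting it all together, a sketch of this solution could look as follows (assuming day[v] is 0 when vertex v has no fruit; all names are ours):

vector<vector<int>> children;
vector<int> day;                       // day[v] = ripe day of the fruit at v (0 if none)
vector<long long> juice;               // juice[v] = juicy-value of the fruit at v
vector<map<int, long long>> diff;      // diff[v] = difference representation of the dp of v

void dfs(int v) {
    for (int c : children[v]) {
        dfs(c);
        if (diff[c].size() > diff[v].size())
            swap(diff[c], diff[v]);                    // smaller to larger
        for (auto& [t, d] : diff[c]) diff[v][t] += d;  // sum up the children dps
    }
    if (day[v] != 0) {
        diff[v][day[v]] += juice[v];                   // add the new constant segment
        auto it = diff[v].upper_bound(day[v]);
        long long rest = juice[v];                     // subtract juice[v] from later entries
        while (it != diff[v].end() && rest > 0) {
            if (it->second <= rest) { rest -= it->second; it = diff[v].erase(it); }
            else { it->second -= rest; rest = 0; }
        }
    }
}

The answer is the sum of all entries of diff[root], which corresponds to the dp-value of the root on the last day.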
We only insert one element per vertex, so with smaller to larger, we get a total running time of $O(n \log^2 n)$.