*Written by Stefanie Zbinden.* *Translated by Stefanie Zbinden.*

Contents

We define bridges and articulation points and develop an algorithm that computes them in linear time.

Like many graph algorithms, this one is a clever application of DFS. If you are not familiar with dfs, you might consider reading the dfs article first.

In this article, all graphs are simple and undirected.

## DFS preorder, postorder and other properties

We will first have a look at dfs and some of its properties, which we will later use to find an efficient algorithm for articulation points and bridges (we will explain later what these are).

For simplicity, we will assume our graph is connected and observe what happens when we run a dfs starting from some node \(r\). The dfs code we use for this is the following:

```cpp
void dfs(vector<vector<int>>& graph, vector<int>& visited, int node) {
    visited[node] = 1;
    for (int next : graph[node])
        if (visited[next] == 0)
            dfs(graph, visited, next);
    visited[node] = 2;
}
```

**Remark:** `graph` is the adjacency list of our graph and `visited` is the visited array, which is initialized with zeros.

At each step of the dfs, the nodes will have one of the following states:

- The node has **not been called** by the dfs yet. In this case, `visited` is equal to 0. We will say such a node is a *white node*.
- The node has been called by the dfs but the call has **not returned yet**. In this case, the dfs is still processing the children of that node and `visited` is equal to 1. We will say that such a node is a *grey node*.
- The node has been called by the dfs and the call has returned. In this case, all the neighbors of the node have been called and the node will **not be called anymore**. `visited` will be equal to 2 and we will say that such a node is a *black node*.

Also we can create a **rooted tree** called the **dfs tree** from the order in which the nodes are called. The root will be the node with which we initially call the dfs (in our case this is \(r\)). If, when processing node \(a\) in the dfs, we call node \(b\), then \(a\) will be the parent of \(b\) in the dfs tree (so there will be an edge from \(a\) to \(b\)). As we can only call neighbors from the original graph during the dfs, our tree will be a subgraph of the original graph.

Now we might ask ourselves (or our rubber mice) why this tree is interesting. Using it we can classify the edges of our original graph into three categories (here, we make a distinction between the edge \((u, v)\) and \((v, u)\) for nodes \(u\) and \(v\)).

- **Tree edges:** These are edges that belong to the dfs tree (so edges from a node to one of its children in the dfs tree).
- **Back edges:** Edges that go from a node to one of its ancestors.
- **Forward edges:** Edges that go from a node to one of its successors that is not its child.

**Why are these all the possible edges?** Assume there are nodes \(a\) and \(b\) such that there exists an edge between them, neither is an ancestor of the other, and \(a\) is called before \(b\) during the dfs. At some point while processing \(a\) we try to go to \(b\), but this must no longer be possible, which means that \(b\) has been visited while \(a\) was grey. But all nodes that are visited while \(a\) is grey are the children of \(a\), their children, and so on, so they lie in the subtree of \(a\). Hence \(a\) is an ancestor of \(b\), a contradiction.

**Disclaimer:** This only works so nicely because we assumed that our graph is **undirected**. In the directed case there are more cases that can appear.

What also might be interesting is the order in which the nodes are called. This order is called **dfs-preorder** and is useful for many algorithms.

We can modify our algorithm to calculate this during the dfs as follows:

```cpp
int counter = 0;
void dfs(vector<vector<int>>& graph, vector<int>& visited,
         vector<int>& preorder, int node) {
    visited[node] = 1;
    preorder[node] = counter;
    counter++;
    for (int next : graph[node])
        if (visited[next] == 0)
            dfs(graph, visited, preorder, next);
    visited[node] = 2;
}
```

There is a nice synergy between the dfs tree and the dfs-preorder: every ancestor of a node is called before the node itself, so the ancestor's preorder index is smaller.

**Remark:** The same can be done with the times when we return. This is then called the **dfs-postorder**.

## Articulation points and bridges

### Definition

- An **articulation point** is a node whose removal increases the number of connected components in the graph.
- Similarly, a **bridge** is an edge whose removal increases the number of connected components in the graph.
- A **leaf** is a node with only one edge (in other words, a node whose degree is one).

Note that by removing a node from the graph we also remove all its edges. Furthermore, if our graph was connected, being an articulation point or a bridge means that the removal disconnects the graph.

**Remark:** From now on, we will mark all articulation points green and all bridges and leaves blue.

Here are some examples:

What you can observe is that every endpoint of a bridge is either an articulation point or a leaf. The question is whether this is a coincidence or always the case. In other words: Do there exist graphs which have a bridge whose endpoint is neither a leaf nor an articulation point? Does there exist a graph with two articulation points that are the endpoints of an edge which is not a bridge? Is every articulation point an endpoint of some bridge?

**Do there exist graphs which have a bridge whose endpoint is neither a leaf nor an articulation point? No.**
Assume we have a bridge. Removing it increases the number of components, splitting a component \(C\) into two components \(C_1\) and \(C_2\). If we remove an endpoint of the bridge instead of the bridge itself, the same split happens; however, if the endpoint formed its own component after the split (which happens exactly if it was a leaf), this component vanishes and the number of components does not increase. Hence every non-leaf endpoint of a bridge is an articulation point.

**Does there exist a graph with two articulation points that are the endpoints of an edge which is not a bridge? Yes.**

An example for this is the following graph:

There is an edge between \(b\) and \(d\), which are both articulation points, but the edge is not a bridge.

**Is every articulation point endpoint of some bridge?** **No, not necessarily**.

We can again look at a counterexample:

Both \((a, b, c)\) and \((a, d, e)\) form a cycle, so removing any single edge does not disconnect the graph. However, removing \(a\) will disconnect the graph (the components \((b, c)\) and \((d, e)\) are left), and thus \(a\) is an articulation point.

### Brute force

**How can we find all the articulation points and bridges in a graph?** We could remove each node and each edge once from the graph and compare the resulting number of connected components to the number of connected components in the original graph.

As we can count the number of components in a graph using dfs in \(O(V+E)\), we can find all the articulation points in \(O(V\cdot(V+E))\) and all the bridges in \(O(E\cdot(V+E))\).

### Efficient algorithm - Tarjan

We haven’t yet had the chance to use all the cool stuff we introduced so far, so of course there is a faster algorithm that uses it! This algorithm is called Tarjan’s algorithm and runs in linear time \(O(V+E)\).

We will do a dfs again and modify it a little bit: in addition to the index in the dfs-preorder (which we will denote by `number[node]`), we will also save for each node the smallest preorder index it has an edge to (denoted by `low[node]`), not counting the edge to its parent.

**Why would we be interested in that?** It turns out that a node \(u\) (which is not the root) is an articulation point if and only if there exists a child \(v\) of it such that all nodes \(w\) in the subtree of \(v\) have `low[w]>=number[u]`. And an edge is a bridge if and only if it is a tree edge from \(u\) (the parent) to \(v\) (the child) such that all nodes \(w\) in the subtree of \(v\) satisfy `low[w]>number[u]`.

**Why is this true?** If we look at a node \(u\) (not the root) in the dfs tree and a child \(v\) of it, we can check what happens if we remove \(u\) from the graph. The subtree with \(v\) as its root stays connected, but it might no longer be connected to the rest of the graph. As we argued in the first part, each node only has edges to its ancestors and its successors. So the only nodes outside the subtree it can have a direct connection to are its ancestors. As discussed before, those all have a smaller preorder index, so if we can connect to one of them, we reach a node with a smaller index than \(u\).

Example:

The red edges mark the edges we took in the dfs tree. In this example, \(u\) is not an articulation point, as there is an edge from \(c\) to \(b\), so the minimal index we can reach in the subtree of \(v\) is 2, which is smaller than 3.

In this example, \(u\) is an articulation point, as the lowest-index node we can reach from the subtree is \(u\) itself, so we cannot reach an index strictly smaller than the index of \(u\).

In this example, \(u\) is an articulation point, as there is **at least one** subtree in which we cannot reach a node with index smaller than the index of \(u\).

**What happens to the root?** The root is a special case, as it is not possible to reach any node with an index smaller than that of the root. If the root has only one child, removing the root will not disconnect the graph (the remaining nodes are still connected through the dfs tree). If the root has more than one child, there cannot be any edge between the different subtrees (such an edge would connect two nodes where neither is an ancestor of the other), so removing the root disconnects the graph. Hence **the root is an articulation point if and only if it has more than one child in the dfs tree**.

Examples:

In the first example, the root is not an articulation point as it has only one child, in the second example it is an articulation point as it has two children.

For **bridges** the reasoning is similar: only edges of the dfs tree can be bridges (removing any other edge leaves the dfs tree intact, which keeps the graph connected). If we look at a tree edge from \(u\) to \(v\), we have to check whether removing it disconnects the subtree of \(v\) from the rest of the graph. This time, however, an edge from the subtree of \(v\) to \(u\) itself (other than the tree edge from \(u\) to \(v\)) also preserves connectivity. That is why the edge is a bridge if and only if all nodes \(w\) in the subtree of \(v\) have `low[w]>number[u]`; in other words, if the lowest-index node \(a\) we can reach satisfies `number[a]<=number[u]`, the edge from \(u\) to \(v\) is not a bridge.

Examples:

In the first example, the lowest index we can reach from the subtree of \(v\) is 2, which is the index of \(u\), so the edge from \(u\) to \(v\) is not a bridge. In the second example, the lowest index we can reach is 3, which is larger than the index of \(u\), so the edge from \(u\) to \(v\) is a bridge.

Implementation in C++:

```cpp
vector<int> is_art;               // 0 default, 1 articulation point
vector<int> low, number;          // -1 means unvisited
vector<pair<int, int>> bridges;   // empty in the beginning, we will insert the bridges
vector<vector<int>> graph;
int timer = 0;                    // "time" would clash with ::time from <ctime>

// Returns the smallest preorder index reachable from the subtree of node.
int dfs(int node, int parent) {
    number[node] = timer;
    low[node] = timer;
    timer++;
    int children = 0;
    int min_in_subtree = low[node];
    for (int next : graph[node]) {
        if (next == parent) continue;
        if (number[next] == -1) {  // tree edge: recurse into the child
            children++;
            int cur = dfs(next, node);
            if (cur > number[node]) bridges.push_back({node, next});
            if (cur >= number[node]) is_art[node] = 1;
            min_in_subtree = min(min_in_subtree, cur);
        }
        low[node] = min(low[node], number[next]);
    }
    if (parent == -1)  // this means that the node is the root
        is_art[node] = (children > 1);
    return min(min_in_subtree, low[node]);
}

void find_bridges() {
    int n = graph.size();
    low = vector<int>(n, -1);
    number = vector<int>(n, -1);
    is_art = vector<int>(n, 0);
    timer = 0;
    for (int i = 0; i < n; i++)
        if (number[i] == -1) dfs(i, -1);
}
```

**Remark:** If your graph is not connected, don’t forget to start the dfs from multiple nodes and treat each starting node as a root (as done in `find_bridges` above).