Introduction to Algorithm Design

Written by Daniel Wolleb-Graf. Translated by Daniel Wolleb-Graf.

Before we start

In this introductory chapter, you will learn what we at SOI mean when we say algorithm and how we can determine if one algorithm is faster than another one.

This topic is always my favorite at the SOI workshops, so it is my pleasure to also present this as a written chapter here. I try to assume no prior knowledge at all and to keep it as easy to follow as I can. If I fail and you get lost along the way or there are just some questions popping up in your head, please let me know at daniel@soi.ch. These notes are still fairly new, so we appreciate your help to keep improving them.

This chapter is based on the content of the SOI workshops and of the first two lectures on “Datenstrukturen & Algorithmen” in spring 2016 given by Prof. Peter Widmayer at ETH Zürich. If you want to see more of them (in German), you can find my other notes from that course here. But now, let’s get started!

What is an algorithm?

In the newspapers, algorithms usually are the evil machines that recommend us red pencils because we recently bought blue ones. But as computer scientists, we mean something else: systematic instructions to solve a problem.

An example that you all know: let us multiply two numbers like we learned it in primary school, digit by digit: If we want to compute 62 times 37, we start with 2 \cdot 7. We write that right-aligned on a sheet of paper. We continue with 6 \cdot 7 and write that, shifted one to the left, one row below. If we do this also for 2 \cdot 3 (shifted by one digit) and 6 \cdot 3 (shifted by two digits), we get a list of four numbers which we can add up to get the final result:

   62*37
--------
      14
     42
      6
    18
--------
    2294

The general approach of how we can multiply two numbers like this is an algorithm, so an instruction manual for how to get from the input (two numbers) to the desired output (their product) and what exactly we have to do step by step. It is not enough to just go through an example, as we want a general method that works for multiplying any two numbers.
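
To make this concrete, here is a small C++ sketch of the schoolbook method (our own illustration; the digit-vector representation is an assumption, not something the chapter prescribes). The numbers are stored digit by digit, so the same method also works for numbers with many digits:

#include <vector>

// Multiply two numbers given as digit vectors (least significant digit first),
// exactly like on paper: one partial product per digit pair, shifted and added up.
std::vector<int> multiply(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> result(a.size() + b.size(), 0);
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < b.size(); ++j)
            result[i + j] += a[i] * b[j];          // partial product, shifted by i+j digits
    for (size_t k = 0; k + 1 < result.size(); ++k) {
        result[k + 1] += result[k] / 10;           // push the carry to the next digit
        result[k] %= 10;
    }
    return result;
}

Called with the digits of 62 and 37 (so a = {2, 6} and b = {7, 3}), this returns {4, 9, 2, 2}, which read from the most significant digit is 2294, just like above.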

At the olympiad in informatics, we often study the properties of such algorithms. Is this multiplication method really correct? Is it clever? Fast? Memory efficient? These questions are at the heart of computer science. The Association for Computing Machinery (ACM for short), the world’s biggest association of computer scientists, gives this definition:

What is computer science?

“Computer science is the systematic study of algorithms and data structures, more precisely:

  • their formal properties
  • their linguistic and mechanical realisation
  • their applications”

The formal properties include correctness and efficiency of an algorithm, which we start discussing here. The realisation is for us the programming part, which we cover in several other introductory chapters. The applications are finally what our tasks are all about and what we will cover in the more advanced topics.

As a first property of algorithms, we often just want to distinguish between solvable and unsolvable. An example:

The Vacuum Cleaner Problem

We know the outline of our living room and we want to place power plugs in our room such that we can clean all of it with a vacuum cleaner that has a power cord of a given length. How many power plugs do we need for this? How do we place them optimally? These are typical questions that SOI tasks can cover.

Today there are also vacuum cleaners that do not need a power cord and drive through your flat like robots. Instead of power plugs these robots need charging stations that they have to reach before their battery is empty. This also leads to interesting questions: How do we spread out these charging stations optimally? What is a good cleaning route for a robot? The search for an optimal cleaning tour is similar to the following well known problem:

The Travelling Salesperson Problem (TSP)

A salesperson wants to travel through all the cities in a country as quickly as possible. So we search for the shortest tour that visits all cities. This is one of the most famous problems in theoretical computer science and is called the Traveling Salesman [1] Problem, or TSP for short.

One way of solving this problem is just to try out all possible orders of the cities. For each permutation (that is what such orders are called in mathematics) of the cities, we compute the cost and keep track of the minimum. So we already found a first algorithm for TSP. But is it a good or a bad one?
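
As an illustration (our own sketch, not part of the original workshop material), the brute-force idea fits in a few lines of C++ using std::next_permutation; the distance matrix dist is an assumed input:

#include <algorithm>
#include <numeric>
#include <vector>

// Try every order of the n cities and keep the cheapest round trip.
// dist[i][j] is the travel cost from city i to city j.
long long shortestTour(const std::vector<std::vector<long long>>& dist) {
    int n = dist.size();
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);          // start with 0, 1, ..., n-1
    long long best = -1;
    do {
        long long cost = 0;
        for (int i = 0; i + 1 < n; ++i) cost += dist[order[i]][order[i + 1]];
        cost += dist[order[n - 1]][order[0]];          // return to the starting city
        if (best == -1 || cost < best) best = cost;
    } while (std::next_permutation(order.begin(), order.end()));
    return best;
}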

For n cities that are all connected with each other, we check n \cdot (n-1) \cdot (n-2) \cdots 4 \cdot 3 \cdot 2 \cdot 1 orders this way. This number is called n-factorial and is written as “n!”.

How large can our problem be so that this algorithm can still solve it? 100 cities? For this we would need to try 100! permutations. But that is more than 2^{100}. Is this a lot? Imagine a piece of paper. Fold it 100 times, then the resulting pile is as thick as 2^{100} layers of paper. This is just about as much as the size of the known universe! [2]
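
A quick side calculation (ours) to see that 100! really is bigger than 2^{100}: the product 100! contains the 97 factors 4, 5, \dots, 100, each of which is at least 4, so

100! \geq 4 \cdot 5 \cdots 100 \geq 4^{97} = 2^{194} > 2^{100}.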

Interestingly no significantly better algorithms for TSP are known today. In 1970, one could solve the problem for 120 cities with a slightly better algorithm. With the same algorithm one can solve 140 cities today, just by using faster computers. So only waiting until the computers get faster will not bring us far.

Meanwhile, there are also better algorithms for certain types of graphs, so that we can also find tours for hundreds of thousands of cities. So an algorithmic breakthrough can really lead to an improved efficiency. How does one invent a faster algorithm?

Algorithm Design

The design and analysis of an algorithm are often intertwined. Sometimes one needs a clever insight, more often the design of algorithms is a very systematic, comprehensible thing. An example for this:

Example: Find the Star

Imagine you are sitting in an SOI workshop and are waiting until Niklaus Wirth enters the room. Who is Niklaus Wirth? Niklaus Wirth is the only Swiss winner of the Turing Award (the Nobel Prize of computer science) and the inventor of the programming language Pascal.

Is Niklaus Wirth already in the room? Who even recognises him? You should all know him. [3] So we want to find Niklaus Wirth or another star among us. What is a star? How do we find one?

Definition: Star
  • everyone knows him/her
  • he/she knows nobody

As the basic operation, we consider a question to person A of the form “Do you know person B?”. The answer is either Yes or No. Other questions are not allowed. How do we ask such questions in a room of n persons most effectively?

  • Can it be that there is no star? Yes, for instance if everyone knows everyone.
  • Can it be that there is exactly one star? Yes, for instance if Niklaus Wirth is here, because he probably does not know us.
  • Can it be that there are several stars? No! See footnote [4] to see why not.

We want to minimize the number of questions.

A naive approach: Asking everyone about everybody else takes n \cdot (n-1) questions. We fill the complete table with the exception of the diagonal (everyone knows himself):

A \ B     Anna   Bettina   Carla
Anna       –      Yes       No
Bettina    No      –        No
Carla      Yes    Yes        –
How do we now find the star in this table? We are looking for a person whose column only contains Yes (everyone knows her/him) and whose row only contains No (she/he does not know anyone). So in the example above, Bettina is the star.

So if we ask all n(n-1) = n^2 - n possible questions then we can decide if there is a star and if so find her/him. When designing algorithms, we always want to be as efficient as possible, so minimize the number of questions that we ask. So is it possible to find a star or say with certainty that there is none without asking all possible questions? How many questions do we need at least? Let Q(n) be the number of questions that we ask for n persons. So far, we have Q(n) = n^2 - n. For n = 1000, we ask 999'000 questions. We now show how we can reduce this to just 2996 questions.

We use induction for this: Induction means nothing else than starting with few people at first and then considering the others one by one. The easiest cases are n = 1 with Q(1) = 0 (a single person is always a star) and n = 2. With only two people in the room, we can always solve the task with two questions: We write Q(2) = 2. For more than two: We send someone out of the room to wait in the hallway, determine the potential star among the remaining people and then fetch back the person in the hallway. For this extra person we have to check whether she/he is a star or confirm that the star from before is still a star. This takes up to 2(n-1) questions. This, we can write as:

Q(n) = 2(n-1) + Q(n-1) = 2(n-1) + 2(n-2) + \dots + 2 = n(n-1)

Unfortunately, we did not save anything and still end up asking almost n^2 questions. But we have applied a new concept: we use the same method on a subset of people over and over again. This is called recursion (from the Latin “going back”).

Why do we not save any questions? As it could be that the person that we send to the hallway is the star, we need many questions when that person reenters. Can we guarantee that we do not send out the star? Here comes the trick: We ask A about B. If A knows B, then A is not a star. If A does not know B, then B is not a star. In both cases, we find one person that is not a star and can send her/him to the hallway. (Whether or not the remaining person is a star is not clear yet.) So after just one question, we can always send a non-star to the hallway. We can repeat this “send out the non-star” procedure until only one person remains. This last person we have to question more thoroughly as she could be a star but does not have to be one. To this end, we fetch one person after the other from the hallway and ask her if she knows the star candidate and vice versa. For each person reentering, we can verify the potential star with just two questions. That’s cool, right?

To compute the precise number of questions Q(n) for n persons, we need a bit of calculation. If you have not seen this in your math class yet, no problem. You can just remember that 3n questions are enough: for each person other than the star, we spend three questions, one when sending her out and two when bringing her back in. Thus for 1000 people, 3000 questions suffice. If you are super interested, you can find the mathematical proof in the appendix.
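
If you prefer to see the whole procedure in code, here is a possible C++ sketch (our own; it assumes a function knows(a, b) that asks person a whether they know person b and counts as one question):

// Assumed to be provided by the room: one question "Does person a know person b?".
bool knows(int a, int b);

// Returns the index of the star among persons 0..n-1, or -1 if there is none.
// Asks at most 3(n-1) questions.
int findStar(int n) {
    // Elimination phase: each question sends one non-star "to the hallway".
    int candidate = 0;
    for (int p = 1; p < n; ++p) {
        if (knows(candidate, p)) candidate = p;  // candidate knows somebody, so is no star
        // otherwise candidate does not know p, so p is no star and stays eliminated
    }
    // Verification phase: two questions for every person brought back in.
    for (int p = 0; p < n; ++p) {
        if (p == candidate) continue;
        if (!knows(p, candidate)) return -1;     // somebody does not know the candidate
        if (knows(candidate, p)) return -1;      // the candidate knows somebody
    }
    return candidate;
}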

We summarize: At first we needed n \cdot (n-1) questions. But after we looked at it closely, we figured out a much much faster method. This new method is so fast that it only asks a small fraction of all possible questions.

Challenge task: Can you do even better?

Cost model

For problems that we want to solve with a computer, the elementary operation is often not “asking a question” but rather things like reading a number, comparing two numbers, assigning values etc. These are all simple operations. At our olympiad, we usually do not want to go further into the details, down to the level of bits and bytes. How long an addition of two (small) numbers really takes depends on many things: the clock rate of your CPU, the usage of the cache, the choice of programming language etc. For simplicity, we assume that each operation takes a single identical step and we call this the unit cost model. So as a first idea, we want to ignore constant factors between the operations.

This is in contrast to our example at the beginning, where the multiplication of two numbers resulted in several operations. For numbers of arbitrary size (often called BigInts) it is no longer realistic to assume that they can be multiplied in a single step. There, it is reasonable to look at the individual digits and the operations with them. “Usually”, the numbers that we work with are of “reasonable” size, e.g. at most a few billions. On modern CPUs, such numbers really can be multiplied in a single step, so the unit cost assumption is not too far fetched. If we don’t say anything else, we always use the unit cost model at SOI. So we assume that the effort required to perform arithmetical operations is independent of the size of the operands.

The second idea is that we are only interested in large inputs, where the algorithm runs for a long time. For small inputs, even a naive algorithm is sufficiently fast.

Whenever we compare two algorithms, we consider the one that runs faster on large inputs to be the better one. To make this mathematically precise, we make some simplifications: we want to ignore constant factors and offsets in the cost function. We deem a linear growth curve with slope 1 just as good as a linear growth curve with slope 20. With that we want to compensate for the hiding of constant factors that we already brought onto ourselves with the unit cost model.

Intuitive O-Notation

What does this mean concretely? For a program that performs 5n^3 + 2n^2 + 8n + 50 steps for an input of size n, we look at this polynomial but only care about the summand with the highest power, so the 5n^3, as everything else is really really small in comparison for large n. In addition, we ignore constant factors, so we simplify 5n^3 to n^3. We can write this in shorthand using the \mathcal{O}-Notation:

We write

5n^3 + 2n^2 + 8n + 50 \in \mathcal{O}(n^3)

and read this as

The program with 5n^3 + 2n^2 + 8n + 50 steps runs asymptotically at most as slow as a program with n^3 steps.
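
To get a feeling for why only the leading summand matters, here is a quick calculation of ours for n = 100:

5 \cdot 100^3 = 5'000'000 \qquad \text{vs.} \qquad 2 \cdot 100^2 + 8 \cdot 100 + 50 = 20'850

So the cubic term alone already accounts for more than 99.5\% of the total 5'020'850 steps, and its share only grows as n gets larger.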

For polynomials, we only care about the highest power. Frequent other functions that you will encounter: Logarithms are faster (= result in faster programs = grow slower) than square roots, square roots are faster than polynomials and polynomials are faster than exponential functions.

Formal O-Notation

This “Oh-Notation” can also be made mathematically formal. If you don’t care about this, just jump over this section.

For this we define the set of all functions that do not grow faster than a given growth function g: \mathbb{N} \rightarrow \mathbb{N}. We write this with a big, calligraphic ‘O’, as

\mathcal{O}(g) = \{f: \mathbb{N} \rightarrow \mathbb{N}, \exists c_1, c_2: \forall n: f(n) \leq c_1 + c_2 \cdot g(n) \}.

We say

All functions f in \mathcal{O}(g) do not grow asymptotically faster than g.

We can use this to simplify terms like 3n-4 \in \mathcal{O}(n) by setting e.g. c_1 = 0 and c_2 = 3. Usually, we are happy to just say that

The search for a star takes \mathcal{O}(n) questions.

or

We find the star in linear time.

The precise number of questions (the Q(n) = 3n-4 that we show in the appendix) often does not even interest us.

The two constants in the definition allow us to scale and shift one function against the other one.

As one could not easily typeset the \in symbol in the beginning, one often wrote 3n-4 = \mathcal{O}(n), but correct is the notation with \in, as 3n-4 is one of many functions that are contained in the set \mathcal{O}(n).

Applications of the O-Notation

Now we can do the same for “growing at least as fast” as class \Omega(g). Or for “strictly less fast” with small-O, so o(g).

If the function f is both in \mathcal{O}(g) and \Omega(g), we say that f and g grow asymptotically equally fast and write f \in \Theta(g). For example: 3n-4 \in \Theta(n).
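
As a quick check (using, as an assumption on our part, the analogous lower-bound condition f(n) \geq c \cdot g(n) for all large enough n): with c = 1 we have 3n - 4 \geq n for all n \geq 2, so 3n-4 \in \Omega(n), and together with 3n-4 \in \mathcal{O}(n) from above this gives 3n-4 \in \Theta(n).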

It takes a bit to get used to this asymptotic notation. In algorithm design it is everywhere, so we will see it again in every SOI topic. But don’t get frightened by it. Most things that you will see are these:

  • \mathcal{O}(n): linear running time (e.g. for the fast star finding algorithm)
  • \mathcal{O}(n^2): quadratic running time (e.g. for the star finding algorithm that asks all possible questions)
  • \mathcal{O}(\log n): logarithmic running time (e.g. for binary search)
  • \mathcal{O}(n \log n): slightly super-linear running time (e.g. for sorting algorithms)

Where are we now?

We have seen how we want to design and analyse algorithms. We have a model of analysis that is quite rough: we consider an algorithm that runs in 2n steps as fast as one that takes 10'000n + 10^6 steps. This is very imprecise, but the general success of algorithm design relies on the fact that this analysis usually works very well. And if you choose the input size large enough (and today’s inputs are often huge), then \mathcal{O}(n) is always faster than \mathcal{O}(n^2), no matter how large the hidden constant is.

Next, we want to again design some “good” algorithms for yet another problem. How do we measure the quality of algorithms?

  • Correctness: Does the program do what it should? We often argue this rather informally.
  • Efficiency: time and space: How long does the algorithm run for and how much memory does it consume?
  • Quality of the solution: how close do we get to the optimum? This is something that we did not consider so far. So here is another problem:

Example: Maximum Subarray

Let us consider these n = 14 numbers

5 7 12 -48 9 36 -17 22 11 -49 49 -49 111 -117

From these numbers, we want to extract a segment, an interval, such that the sum of the numbers therein is maximized. For small inputs such as the one above, we can easily try out all possible segments, but for n = 10^6 this can take a long time.

We can see this as stock prices. The numbers are the daily changes and we ask in hindsight: What would have been the best day to buy and later sell a share so that we can maximize our profit?

First naive solution

Here is a pseudocode implementation of an algorithm that simply tries out everything. Let d[k] denote the k-th number in the input, so the stock price change on day k.

for i = 1 to n do
    for j = i to n do
        S := 0
        for k = i to j do
            S := S + d[k]
        store the maximum S seen so far
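
For the curious, this is roughly what the pseudocode could look like in C++ (one possible translation of ours; the vector d holds the daily changes and, as everywhere in this chapter, the empty interval with sum 0 is allowed):

#include <algorithm>
#include <vector>

// Naive maximum subarray: try every interval [i, j] and sum it up from scratch.
long long maxSubarrayNaive(const std::vector<long long>& d) {
    int n = d.size();
    long long best = 0;                       // the empty interval is always allowed
    for (int i = 0; i < n; ++i)
        for (int j = i; j < n; ++j) {
            long long S = 0;
            for (int k = i; k <= j; ++k) S += d[k];
            best = std::max(best, S);
        }
    return best;
}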

We look at every possible interval [i, j] and compute the corresponding sum S. How long does this algorithm run?

Upper bound on the running time: Let us count the number of additions generously: each loop is executed at most n times. As the loops are nested, we take the product, so n \cdot n \cdot n = n^3 in \mathcal{O}(n^3).

But is this algorithm really that slow or did we just bound way too roughly? Let us also bound the running time conservatively, so counting rather too few additions. For this, let us assume that the start of the interval i is only in the first third of the array and that the end of the interval j is only in the last third of the array. For each such pair i, j, it holds that k has to go over at least \frac{n}{3} values. As there are at least \frac{n}{3} \cdot \frac{n}{3} such pairs, in total, even under these assumptions, we still perform at least \frac{n^3}{27} steps, which is in \Omega(n^3).

As we got the same cubic upper and lower bound, we can thus say that the running time of our algorithm is in \Theta(n^3). This is not especially quick and only suffices to work through a couple hundred days of the stock exchange.

So we got a program that is correct (in the sense that it computes the optimal solution), but we would like to get a faster one. We have to ask ourselves the question: What do we do that is too much? Do we compute something that we would not need to or do we compute the same thing over and over again? If we watch our algorithm “at work” we can see that it repeats itself. We sum up the numbers from i to j and in the next round from i to j+1. So in doing that we sum up almost the same set of numbers again, just one more number at the end. Couldn’t we compute the sum of the numbers from i to j faster somehow?

Solution with Prefix Sums

It suffices to compute all the prefix sums: A prefix is an initial part of a sequence. The prefix of length i of our input is simply the first i numbers.

For each position i, we compute the sum S_i as the sum of the numbers up to position i. All these values S_i, we can compute in linear time, so in \mathcal{O}(n), increasingly in i: To compute S_i, we just have to add d[i] to S_{i-1}. After that, the sum of the numbers from i to j is simply S_j - S_{i-1}.

We can now get rid of the innermost loop:

// Compute the prefix sums in O(n)
S[0] := 0
for i := 1 to n do
    S[i] := S[i-1]+d[i]
// Take the sum of each interval in O(n^2)
for i := 1 to n do
    for j := i to n do
        sum := S[j]-S[i-1]
        store the maximum sum seen so far

How much faster is this? The precomputation of the prefix sums takes n steps. After that, we can compute the sum of each of the \mathcal{O}(n^2) intervals in constant time. Hence we get \mathcal{O}(n^2 + n) = \mathcal{O}(n^2) steps overall.
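
Again as a possible C++ version (our own translation of the pseudocode above; the scalar that we maximize is called best so that it does not clash with the prefix sum array S):

#include <algorithm>
#include <vector>

// Maximum subarray with prefix sums: S[i] is the sum of the first i numbers,
// so the sum of the numbers from position i+1 to j is simply S[j] - S[i].
long long maxSubarrayPrefix(const std::vector<long long>& d) {
    int n = d.size();
    std::vector<long long> S(n + 1, 0);
    for (int i = 1; i <= n; ++i) S[i] = S[i - 1] + d[i - 1];   // prefix sums in O(n)
    long long best = 0;
    for (int i = 0; i < n; ++i)                    // interval starts right after position i
        for (int j = i + 1; j <= n; ++j)           // and ends at position j
            best = std::max(best, S[j] - S[i]);    // its sum in O(1)
    return best;
}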

Divide-and-Conquer

Are we happy now? \mathcal{O}(n^2) can still be too slow for many real inputs. But contrary to before, this solution does not have an obvious speed bottleneck that we could attack.

To get faster, we want to look at another trick of algorithm design: divide and conquer. It is a powerful technique that the ancient Romans already knew (Latin: divide et impera), although not in an algorithmic context but rather related to strong foreign policy.

As with the star, we try an inductive approach. But here we will not just take out one element (like with the star). Instead, we will split in the middle:

We split our sequence into two parts. Where can the solution, i.e. the interval with the biggest sum, be? We distinguish three cases: Either completely in the left half, completely in the right half, or it contains a part of the left and a part of the right half. We want to consider exactly these three cases and compute the maximum interval for each of them. Then we just have to pick the best of those three intervals.

Such a strategy, that splits the problem into smaller portions, solves each of them separately and then recombines them, is called “divide and conquer”. Did this split help us?

The first two cases are easy to cover: We compute the solution recursively (with the same approach) on both halves separately. But how can we compute the solution for the third case?

We search in the left half for the best piece that ends at its right border, and we combine it with the best piece from the right half that starts at its left border. Hence we have to consider the prefix sums of the right half and the suffix sums of the left half (a suffix is just a piece at the tail, so a prefix “from the other side”). We already saw that we can compute them in linear time. We then join the best prefix from the right to the best suffix from the left. So we can cover the third case in linear time.

Divide-and-Conquer algorithm:
  • If there is at most one number:
    • If exactly one positive number: take this number
    • otherwise: 0
  • If there are several numbers:
    • divide in the middle
    • determine the optimum solution separately in both parts
    • determine the best border pieces on the left and right
    • take the best of the three potential solutions
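
A possible C++ sketch of these steps (our own implementation; solve(d, 0, n-1) computes the answer for the whole sequence):

#include <algorithm>
#include <vector>

// Best sum of an interval lying completely inside d[lo..hi]; the empty interval counts as 0.
long long solve(const std::vector<long long>& d, int lo, int hi) {
    if (lo > hi) return 0;                          // no numbers at all
    if (lo == hi) return std::max(0LL, d[lo]);      // one number: take it only if positive
    int mid = (lo + hi) / 2;
    // Cases 1 and 2: the best interval lies completely in one of the halves.
    long long best = std::max(solve(d, lo, mid), solve(d, mid + 1, hi));
    // Case 3: the best interval crosses the middle, so it consists of the best
    // suffix of the left half glued to the best prefix of the right half.
    long long suffix = 0, bestSuffix = 0;
    for (int i = mid; i >= lo; --i) { suffix += d[i]; bestSuffix = std::max(bestSuffix, suffix); }
    long long prefix = 0, bestPrefix = 0;
    for (int i = mid + 1; i <= hi; ++i) { prefix += d[i]; bestPrefix = std::max(bestPrefix, prefix); }
    return std::max(best, bestSuffix + bestPrefix);
}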

Analysis: Now we want to analyze the running time of our algorithm to see if we improved anything at all. This is a bit of a lengthy calculation - if you want, you are welcome to skip it and directly jump to the linear solution. [5]

We write T(n) for the time with n elements. For the case where we compute (potentially multiple) partial solutions, we can write the running time as

\begin{align*} T(n) &= \text{split + left solution + right solution + determine border pieces}\\ &= 1 + T\left(\frac{n}{2}\right) + T\left(\frac{n}{2}\right) + a' \cdot n\\ &\leq 2 \cdot T\left(\frac{n}{2}\right) + a \cdot n. \end{align*}

Let a' be a constant that bounds the effort to determine the solution across the middle, a = a'+1, and for the base case with n = 1 let T(1) = b for some constant b.

How does one solve such a recursion? We do what we did before. We plug the formula into itself a couple of times and see if we get a feeling for the solution.

\begin{align*} T(n) &= 2 \cdot T\left(\frac{n}{2}\right) + a \cdot n \\ &= 2 \cdot \left(2 \cdot T\left(\frac{n}{4}\right)+a \cdot \frac{n}{2} \right) + a \cdot n = 4 T\left(\frac{n}{4}\right) + 2an\\ &= 2 \cdot \left(2 \cdot \left(2 \cdot T\left(\frac{n}{8}\right)+a \cdot \frac{n}{4}\right)+a \cdot \frac{n}{2}\right) + a \cdot n = 8 T\left(\frac{n}{8}\right) + 3an\\ &\;\;\vdots\\ &= 2^i T\left(\frac{n}{2^i}\right) + i \cdot a \cdot n \end{align*}

This stops if n = 2^i or equivalently if i = \log_2 n, because then we reach T(1) and the recursion stops. So we suspect something like T(n) = n \cdot b + a \cdot n \cdot \log n, which would mean T(n) \in \mathcal{O}(n \log n).

But is this really true? To formally prove it, we do not want to use \mathcal{O}-notation because hiding constants in a recurrence can be dangerous.

Proof by induction: To prove: There are c_1 and c_2, such that T(n) \leq c_1 n \log n + c_2.

Induction base: To get T(1) \leq c_2, pick c_2 \geq b.

Induction hypothesis (IH): For T\left(\frac{n}{2}\right) we already know that T\left(\frac{n}{2}\right) \leq c_1 \frac{n}{2} \log \left(\frac{n}{2}\right) + c_2.

Induction step:

\begin{align*} T(n) &\leq 2\cdot T\left(\frac{n}{2}\right) + a\cdot n \\ &\stackrel{\text{IH}}{\leq} 2\cdot \left(c_1\cdot \frac{n}{2}\cdot \log\left(\frac{n}{2}\right) + c_2\right) + a\cdot n\\ &= c_1\cdot n\cdot \log(n) - c_1 n + 2 c_2 + an \\ &\stackrel{!}{\leq} c_1\cdot n\cdot \log(n) + c_2 \underbrace{-c_1 n + c_2 + an}_{\stackrel{!}{\leq} 0} \end{align*}

Can we make the last inequality hold? We have some room left to play, as we did not fix c_1 yet. Which condition does c_1 need to fulfill?

\begin{align*} -c_1 n + c_2 + an &\leq 0\\ (a-c_1) &\leq - \frac{c_2}{n}\\ \frac{c_2}{n} + a &\leq c_1 \end{align*}

Pick c_1 = a + c_2 for example. What this shows is that our Divide-and-Conquer algorithm runs in time \mathcal{O}(n \log n).

Linear Solution

Can we do even better? Let us start from scratch and try again with an induction from left to right.

As a condition that should always stay true, we want to assume that after step i, we know the optimum solution for the first i elements. If we now add element (i+1) we want to quickly figure out if this new element belongs to the best interval or not. For this, we do not only compute the maximum up to i but also how much we can get if we end at the right border of the current prefix, so at element i. Using this “border maximum”, we can now quickly compare whether we can get a new maximum with the new element or not.

For this we need that once we got up to position i, we have already computed these things:

  • the maximum in the range 1, \dots, i,
  • the maximum of an interval that ends exactly at position i.

// bordermax = best sum of an interval that ends exactly at position i
bordermax := 0
// max = best sum of any interval within the first i positions
max := 0
for i = 1 to n do
    bordermax := bordermax + d[i]
    // a negative border piece only hurts, so better start fresh
    if bordermax < 0 then bordermax := 0
    if max < bordermax then max := bordermax

This approach is super simple to implement and has even just linear running time! So \mathcal{O}(n), perfect!
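
In C++, the same idea could look like this (our own version; it additionally remembers where the best interval starts and ends, which already answers one of the questions further below):

#include <vector>

struct Result { long long sum; int from, to; };    // best sum and its interval [from, to]

Result maxSubarrayLinear(const std::vector<long long>& d) {
    Result best = {0, 0, -1};                      // the empty interval
    long long bordermax = 0;                       // best sum of an interval ending at position i
    int borderFrom = 0;                            // where that border interval starts
    for (int i = 0; i < (int)d.size(); ++i) {
        bordermax += d[i];
        if (bordermax < 0) {                       // a negative border piece only hurts: start fresh
            bordermax = 0;
            borderFrom = i + 1;
        }
        if (bordermax > best.sum) best = {bordermax, borderFrom, i};
    }
    return best;
}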

Lower bound

Now we get ambitious and want to do even better. Can we do \mathcal{O}(\sqrt n) or even \mathcal{O}(\log n)? Or is no further improvement possible?

The following consideration shows that we can not be better than linear: Assume we have an algorithm that does not look at all the elements. Then this algorithm can not be correct:

  • If the algorithm outputs a range that contains the not-looked-at element, then let this element be -\infty. Then it would have been better to take nothing at all.
  • If the output range does not contain the not-looked-at element, then let this element be \infty, so that it really should have been in the solution.

Hence every correct algorithm has to look at every element at least once. The running time is thus in \Omega(n). This is not a property of a specific algorithm, but of all algorithms that solve the maximum subarray problem. We call this the runtime complexity of the problem.

However, the size of the input is not always a lower bound for the runtime. Remember the star problem. The input was quadratic (all pairs), but we only had to ask linearly many questions.

Here some extra questions (with their answers):

  • How do we find the interval (and not just its sum)? Answer: During the algorithm, we keep track of the bounds for max and bordermax and update them upon each change.
  • What is the memory usage? Answer: We distinguish two types of memory: memory used by the program (including the input) and how much memory we use additionally during the computation. This way, the first and last algorithm only require constant additional memory for the values S, bordermax and max. But as soon as we for instance compute all the prefix sums, like in the second algorithm, we need linear, so \mathcal{O}(n), additional memory. Bonus question: How much additional memory does the divide-and-conquer algorithm need? [6]

So this is it. You reached the end of our crash course in algorithm design and analysis. If you have any questions, please let me know (daniel@soi.ch). Otherwise have fun with the other sections of our tutorials.

Appendix

Here we give a few proofs that are not really important for understanding the rest of this chapter.

Induction for the Star task

We can write the function Q(n) as follows:

Q(n) = \begin{cases} 2 & \text{for } n=2\\ 1 + Q(n-1) + 2 & \text{for } n>2 \end{cases}

The two cases distinguish whether we still send someone out or not. Now we telescope, meaning that we plug in the formula a couple of times and see what we get:

\begin{align*} Q(n) &= 3 + Q(n-1) = 3 + 3 + Q(n-2) = 3+3+3+Q(n-3)\\ &= \underbrace{3+\dots+3}_{\text{$n-2$ times}}+\underbrace{Q(2)}_{=2} \\ &= 3(n-2) + 2 = 3n-4 \end{align*}

To formally prove that Q(n) = 3n - 4, we prove it by induction:

  • Hypothesis: Q(n-1) = 3(n-1) - 4
  • Base of the induction: Q(2) = 2
  • Step of the induction: Q(n) = 3 + Q(n-1) = 3 + 3(n-1) - 4 = 3n - 4
[1]Or politically correct: Traveling Salesperson Problem
[2]The size of the known universe is about 2^{80} kilometers (Source: Wolfram Alpha). The world record for folding paper is at 13 folds. In 2002, the high school student Britney Gallivan folded a piece of paper twelve times. Even for this she needed a roll of toilet paper of length more than one kilometer [Details]. Since then several attempts were made to break this record. In 2012 a group of students at MIT managed to do 13 folds [Article] and also a team of the BBC has tried [Video].
[3]More about him on Wikipedia.
[4]If there were two stars, then each of them would have to be known by everyone, so in particular the first star would have to know the second one. But then the first star knows somebody and hence, by definition, is not a star. Confused?
[5]Spoiler: We will get \mathcal{O}(n \log n).
[6]Implemented cleverly, the recursive call can be done with constant (\mathcal{O}(1)) additional memory and the depth of the recursion is limited to \log n, hence the additional memory needed is \mathcal{O}(\log n).