# Theoretical-Greedy

Written by Bibin Muttappillil.

What is greedy? The greedy approach solves optimization problems by always choosing the locally best option. 'Best' is defined by the problem statement (e.g. maximizing a sum or minimizing a cost). 'Locally' means that at each decision point (e.g. taking, disregarding or choosing something) we only consider the current state and pick the option that gives us the best result immediately after this decision.

This approach often makes sense intuitively and works on small cases, but there are problems for which it fails. And even when it does work, arguing that we have the optimal solution just because we took the best option at each step does not suffice. Usually the idea of a greedy algorithm is simple; reasoning about its correctness is not.

## Why greedy doesn’t work

Let’s look at an example where the greedy approach doesn’t work:
Given are $n$ coins with their respective values $v_{0}, v_{1}, {\dots}, v_{n-1}$. We want to steal the coins but without too much suspicion, so we are not allowed to take coins next to each other.
What is the maximum sum of values we can steal?
A greedy approach: among the coins that are still allowed (i.e. not adjacent to an already taken coin), we take the one with the biggest value. We repeat this until there are no allowed coins left.
Why does this fail? If we look at the example $42, 51, 20$ then the best value we can steal is $42 + 20 = 62$, but our greedy approach would take the coin with the biggest value $51$ and then there wouldn’t be any coins left to steal. So this greedy approach doesn’t work in all cases.
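We can make this failure concrete with a small runnable sketch (in Python, which we use for illustration): the greedy strategy from above next to a simple dynamic-programming solution that solves this problem correctly.

```python
def greedy_steal(v):
    """Repeatedly take the most valuable coin that is still allowed
    (i.e. not adjacent to an already taken coin)."""
    n = len(v)
    taken = [False] * n
    blocked = [False] * n
    total = 0
    while True:
        best = -1
        for i in range(n):
            if not taken[i] and not blocked[i]:
                if best == -1 or v[i] > v[best]:
                    best = i
        if best == -1:
            return total
        taken[best] = True
        total += v[best]
        # the neighbors of a taken coin become forbidden
        if best > 0:
            blocked[best - 1] = True
        if best + 1 < n:
            blocked[best + 1] = True

def optimal_steal(v):
    """Dynamic programming: track the best sum if the current coin
    is taken vs. skipped."""
    take, skip = 0, 0
    for x in v:
        take, skip = skip + x, max(take, skip)
    return max(take, skip)

print(greedy_steal([42, 51, 20]))   # greedy grabs 51 and blocks both neighbors
print(optimal_steal([42, 51, 20]))  # 42 + 20 = 62
```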

## Why greedy could work

Of course sometimes a greedy algorithm actually solves a problem correctly. The question is how can we see that, and more importantly how can we reason that it actually solves it correctly?
As said in the Theoretical-intro, there is no standard way to reason about these things, but there are strategies that can at least help us:

Strategy:
a) Assume the greedy approach is wrong.
b) Look where an optimal solution would differ from the greedy one.
c) Show that we can 'exchange' some choices to make the greedy solution equally good as or better than the optimal one.

We can see that this general strategy is a proof by contradiction, as points a) and c) contradict each other. So if we manage to show that c) follows whenever a) holds, then we have shown that the greedy algorithm solves the problem correctly.
This isn't that easy: a) and b) are rather straightforward, but the hard part lies in c). Finding this 'exchange argument' relies on an understanding of some crucial properties of the problem, which differ from problem to problem and can be hard to spot.

We will look at three examples to see how we can use this strategy to reason about the correctness of a greedy approach.

```
// solves the task with a_i as an input array
void solve_concierge(int[] a):
    int N = a.size()

    // the rooms that are available
    set<int> rooms = {1, 2, ..., N}

    // participant i will have room room_assignment[i]
    int[] room_assignment = {0, ..., 0} // with size = N

    for i in {0, 1, ..., N-2}:
        // the time we have until the next participant arrives
        int time = a[i+1] - a[i]

        /*
         * We assume that we already proved
         * find_room_in_time (basically a binary search).
         * find_room_in_time returns the farthest away (available) room r
         * whose round trip takes at most `time`,
         * meaning 2*r <= time
         */
        int r = find_room_in_time(time/2, rooms)

        // when there is no reachable room available
        if r == -1:
            print("IMPOSSIBLE")
            exit()

        room_assignment[i] = r
        rooms.remove(r)

    // the set of rooms has exactly one element left
    room_assignment[N-1] = rooms[0]

    for r in room_assignment:
        print(r + " ")
```
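The helper find_room_in_time is not spelled out above. A possible implementation, sketched in Python with the available rooms kept in a sorted list (names and details are our own), is indeed just a binary search:

```python
import bisect

def find_room_in_time(limit, rooms):
    """Return the farthest available room r with r <= limit, or -1.

    `rooms` is a sorted list of available room numbers; `limit` is
    time/2, so r <= limit is equivalent to the round trip 2*r <= time.
    """
    # first index whose room number exceeds the limit
    pos = bisect.bisect_right(rooms, limit)
    if pos == 0:
        return -1  # even the closest room is too far away
    return rooms[pos - 1]

print(find_room_in_time(2, [1, 2, 3, 5]))  # room 2 is the farthest with r <= 2
print(find_room_in_time(0, [1, 2, 3, 5]))  # no room reachable: -1
```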


The crucial observation here is that we should always take the room with as high a number as possible while still allowing the concierge to reach the next participant in time. The intuition is that keeping the rooms with faster access available is always better.

With that we can try to reason the correctness:
We assume that our greedy solution is wrong, meaning either
- we said it is "POSSIBLE" while it is actually impossible, or
- we said it is "IMPOSSIBLE" while there actually exists a solution.

The first part is easier to disprove. Our algorithm found a solution which also fulfills the requirements:
- We finish bringing the current participant to their room before the next one arrives. This is guaranteed by the function find_room_in_time() in the code: we only take a room from which the concierge can still reach the next participant in time, or we print "IMPOSSIBLE", which puts us in the other case.

So with this argument we can say that if our algorithm says "POSSIBLE" then it actually is possible.
Now for the second part. We assume that our algorithm didn't find a solution while one exists. An 'optimal' solution must then differ from ours at some point (in this case our solution is only partial, up to the point where we abort because we couldn't reach the next participant). We look at the first point where the solutions differ, meaning the 'optimal' solution gave the $i$-th participant a different room than our algorithm did. All participants before were given the same rooms, so our greedy and the 'optimal' solution have the same set of rooms left to give to the $i$-th participant. Our greedy gives room $r$, the free room as far away as possible that still guarantees reaching the next participant. So the optimal solution can't choose a room $> r$, as it would then be impossible to reach the next participant, and also not $r$ itself, since our greedy took it and they differ. So the optimal solution chose room $s$ with $s < r$.
Now to the interesting part. Exchanging room $s$ with room $r$ in the optimal solution creates a new optimal solution. This solution still works: we verified in the greedy algorithm that the concierge has enough time to give the $i$-th participant room $r$, and giving room $s$ to the participant who had room $r$ before only gives the concierge even more time to serve the next one. The new optimal solution and our greedy now agree on (at least) all participants up to and including the $i$-th. We now have a new optimal solution on which we can repeat the same argument*. Repeating this argument again and again, we eventually reach a problem:
- Either we reach the last room, and then our assumption that an optimal solution and the greedy differ is wrong $\Rightarrow$ contradiction,
- or we reach a point where our greedy didn't find a room; but then the optimal solution can't find one either (as it has the same set of rooms left) $\Rightarrow$ contradiction.

In both cases we reach a contradiction, meaning our assumption at the beginning must be wrong. With this we can say that if our algorithm says "IMPOSSIBLE" then it actually is impossible.

(*Note: 'repeating the same argument' is a bit fuzzy. To be more precise we would have to do induction. We left it out for clarity and conciseness.)

This would be a sufficient reasoning for the SOI. QED
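For completeness, the whole greedy can also be written out as a short runnable sketch (Python; the arrival times in the example are our own, and the binary search is replaced by a plain linear scan for brevity):

```python
def solve_concierge(a):
    """Greedy room assignment: give each participant the farthest
    available room that still lets the concierge get back in time."""
    n = len(a)
    rooms = list(range(1, n + 1))  # available rooms, sorted by distance
    assignment = [0] * n
    for i in range(n - 1):
        time = a[i + 1] - a[i]  # time until the next participant arrives
        # farthest available room r with round trip 2*r <= time (linear scan)
        candidates = [r for r in rooms if 2 * r <= time]
        if not candidates:
            return None  # "IMPOSSIBLE"
        r = candidates[-1]
        assignment[i] = r
        rooms.remove(r)
    assignment[n - 1] = rooms[0]  # exactly one room is left
    return assignment

# arrivals at times 0, 4, 10 (our own example): rooms 2, 3, 1
print(solve_concierge([0, 4, 10]))
```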

```
// solves the task with (a_i, b_i) as an input array of pairs
solve_binge( (int, int)[] shows ):
    sortByEndingTime(shows)

    int result = 0
    int currentTime = 0

    for (start, end) in shows:
        // if the show doesn't overlap with the last selected one
        if start >= currentTime:
            // select (greedily) the current show
            currentTime = end
            result++

    print(result)
```

It might not be immediately clear why it is better to sort by ending times, but this will actually help us a lot with our reasoning. The key observation is that it is good to choose a show which ends sooner, so that we have more time left to watch other shows.
Again we try a proof by contradiction and therefore assume that our greedy is wrong, meaning either:
- our solution is too large $\Leftrightarrow$ there is no valid configuration with this number of shows, or
- our solution is too small $\Leftrightarrow$ there exists a configuration with more shows.

The first part can't happen, because our greedy algorithm constructs a configuration by choosing non-overlapping shows and then counting them. So we found a configuration of shows fulfilling the requirements, breaking the first assumption.
For the second part we assume there exists an optimal configuration, meaning one such that no configuration has more shows, and that it has more shows than our greedy found. We sort both configurations again by ending times and look at the first show where they differ. So at some point we took a show $S$ while the optimal configuration instead picked $T$ (the next show in the list of shows of the optimal configuration).
We know that the show $T$ has to end later or at the same time as $S$ ($S_b \le T_b$). The reason is that our greedy algorithm took the show which ends as soon as possible and is still valid (so it doesn’t overlap with any other show). Since the optimal configuration has the same set of shows to choose from and also has to have a valid configuration, it can not have picked any show ending sooner.
We can now exchange the shows $S$ and $T$ in the optimal configuration without any problem. The reason is that the optimal configuration only has shows left that start after $T_b$, so the show $S$ (which ends no later than $T$) can't overlap with them. $S$ also doesn't overlap with the shows chosen so far, as our greedy has $S$ in it together with the same earlier shows as the optimal configuration. So we have found a new configuration with the same number of shows as the old optimal one, but with one more show in common from the start with our greedy configuration. We can repeat this argument until we reach the end, where the greedy and some optimal configuration have the same number of shows. This contradicts our assumption of the greedy configuration being wrong, meaning there is no configuration with more shows than the one we found.
With that we can conclude that our greedy algorithm solves the problem correctly. QED
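The pseudocode above translates almost directly into runnable Python; the show list in the example is our own:

```python
def solve_binge(shows):
    """Greedy interval scheduling: sort by ending time, take every
    show that starts no earlier than the previously selected one ends."""
    shows = sorted(shows, key=lambda s: s[1])  # sort by ending time
    result = 0
    current_time = 0
    for start, end in shows:
        if start >= current_time:  # no overlap with the last selected show
            current_time = end
            result += 1
    return result

# three overlapping shows: we can watch (1, 3) and (3, 5), but not (2, 4)
print(solve_binge([(2, 4), (1, 3), (3, 5)]))  # 2
```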

There are several ways to attack this problem; one of them is greedy. Be careful though: some greedy approaches are correct and some are not.

A wrong greedy solution:

```
// tries to solve the task
// with d_i as an input array and K = 2
wrong_stollencut(int[] d):
    int sum = sum(d)

    // try to cut the stollen into perfect thirds,
    // keeping the rounded-up and rounded-down candidates
    int (a_l, a_r, c_l, c_r) = partition(d)

    // calculate all 4 cases:
    int case1 = max_diff(a_l, sum - a_l - c_l, c_l)
    int case2 = max_diff(a_l, sum - a_l - c_r, c_r)
    int case3 = max_diff(a_r, sum - a_r - c_l, c_l)
    int case4 = max_diff(a_r, sum - a_r - c_r, c_r)

    // take the smallest case
    int result = min(case1, case2, case3, case4)

    return result

partition(int[] d):
    int N = d.size()
    int sum = sum(d)

    int i = 0
    int a_l = 0
    int a_r = 0

    // prefix sums with a_l <= sum/3 < a_r
    while 3*a_r <= sum:
        a_r += d[i]
        i++
    a_l = a_r - d[i-1]

    int j = N-1
    int c_l = 0
    int c_r = 0

    // suffix sums with c_l <= sum/3 < c_r
    while 3*c_r <= sum:
        c_r += d[j]
        j--
    c_l = c_r - d[j+1]

    return (a_l, a_r, c_l, c_r)

int max_diff(int a, int b, int c):
    return max(a, b, c) - min(a, b, c)
```


This seems like a reasonable idea, but it solves the following example incorrectly.

Input:

```
3
1 1 10
```

Output:

```
Output of wrong_stollencut: 10
Correct output: 9
```

Comment:

Our algorithm only tries $\{[1, 1], [1, 1, 10]\}$ for the left piece and $\{[10], [\,]\}$ for the right piece.
A correct partition would be: $1 \mid 1 \mid 10$.

Such a thing is called a counterexample. A correct algorithm doesn't allow for a counterexample, and a good reasoning makes clear why such a counterexample is impossible.
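We can also check the counterexample mechanically: for $K = 2$ cuts there are only $O(n^2)$ ways to split the array into three contiguous (possibly empty) pieces, so a brute force (a Python sketch written for this text; the name brute_stollencut is our own) finds the true optimum:

```python
def brute_stollencut(d):
    """Try every pair of cut positions i <= j, splitting d into
    d[:i], d[i:j], d[j:], and minimize max piece sum - min piece sum."""
    n = len(d)
    best = float("inf")
    for i in range(n + 1):
        for j in range(i, n + 1):
            parts = [sum(d[:i]), sum(d[i:j]), sum(d[j:])]
            best = min(best, max(parts) - min(parts))
    return best

print(brute_stollencut([1, 1, 10]))  # the cut 1 | 1 | 10 gives 10 - 1 = 9
```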

A correct greedy solution:

```
// solves the task
// with d_i as an input array and K = 2
solve_stollencut(int[] d):
    int (a_l, a_r, c_l, c_r) = partition(d)

    int case1 = solve_right_bound(d, a_l)
    int case2 = solve_right_bound(d, a_r)
    int case3 = solve_left_bound(d, c_l)
    int case4 = solve_left_bound(d, c_r)

    int result = min(case1, case2, case3, case4)

    print(result)

// find the best second cut
// if the first piece (with sum `left`) is given,
// by iterating through every possibility
int solve_right_bound(int[] d, int left):
    int N = d.size()
    int sum = sum(d)

    int a = 0
    int i = 0
    while a != left:
        a += d[i]
        i++

    int b = 0
    int result = inf
    while i < N:
        b += d[i]
        c = sum - a - b
        result = min(result, max_diff(a, b, c))
        i++

    return result

// fixing the right piece (with sum `right`) is the same
// as reversing the array and fixing the left piece
int solve_left_bound(int[] d, int right):
    int[] reversed_d = reverse(d)

    return solve_right_bound(reversed_d, right)
```
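To convince ourselves that this version handles the earlier counterexample, here is a direct Python translation of the two helpers (a sketch; the partition candidates for $[1, 1, 10]$ are passed in by hand):

```python
def max_diff(a, b, c):
    return max(a, b, c) - min(a, b, c)

def solve_right_bound(d, left):
    """Fix the left piece to have sum `left`, then try every
    position for the second cut and keep the best result."""
    total = sum(d)
    a, i = 0, 0
    while a != left:  # consume the given left piece
        a += d[i]
        i += 1
    b = 0
    result = float("inf")
    while i < len(d):
        b += d[i]
        c = total - a - b
        result = min(result, max_diff(a, b, c))
        i += 1
    return result

def solve_left_bound(d, right):
    # fixing the right piece is the same problem on the reversed array
    return solve_right_bound(d[::-1], right)

# candidates from partition([1, 1, 10]): a in {2, 12}, c in {0, 10}
cases = [solve_right_bound([1, 1, 10], 2),
         solve_right_bound([1, 1, 10], 12),
         solve_left_bound([1, 1, 10], 0),
         solve_left_bound([1, 1, 10], 10)]
print(min(cases))  # 9, matching the optimal cut 1 | 1 | 10
```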


coming soon: a correct reasoning …