# How to Get Binary Search Right

As you already know, binary search is an O(log N) way to check whether a particular value exists in a sorted array. There are well-known reports on how "nearly all" of our implementations of binary search are broken. The mistake discussed in that post is an integer overflow when calculating the index of the middle element. But regardless of whether you get this bit right, there's a different problem with all the binary search implementations I've seen.

They're just too damn complicated and hard to write correctly from memory--which is exactly what you have to do at, say, a job interview.

## Recap

Let's recap what the binary search algorithm is, at a high level. We are given a **sorted** array and a value we're looking for. Is the value in the array? Here's the basic recursive algorithm:

- Is the middle element equal to the value? Yes--good, we found it.
- Is it smaller than what we're looking for? Yes--repeat the search in the part of the array between the middle and the end.
- So it must be larger then--repeat the search in the part between the beginning and the middle.
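The recursive steps above can be sketched like this (a minimal illustration with names of my choosing, not yet the final version developed below):

```cpp
#include <cstddef>

// Recursive binary search: does x occur in the sorted range a[lo..hi)?
bool search(const int* a, size_t lo, size_t hi, int x) {
    if (lo >= hi) return false;        // empty range: not found
    size_t mid = lo + (hi - lo) / 2;   // middle element (overflow-safe)
    if (a[mid] == x) return true;      // found it
    if (a[mid] < x)                    // x can only be to the right
        return search(a, mid + 1, hi, x);
    return search(a, lo, mid, x);      // x can only be to the left
}
```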

## Key points

Almost always, if you're relying on your memory, you should prefer simple algorithms to complex ones--just as you should implement a treap instead of an AVL tree. So here's the sequence of steps that helps me write binary search correctly.

- **Simplicity**: solve "bisection" rather than "binary search"
- **Consistency**: use a consistent definition of the search bounds
- **Overflows**: avoid overflows in integer operations
- **Termination**: ensure termination

### 1. **Simplicity**: Solve "Bisection" rather than "Binary Search"

From now on, let's redefine the problem. We aren't searching for anything anymore. "Searching" for an element in an array is a bit hard to define. Say, what should "searching" return if the element doesn't exist, or if there are several equal elements? That's a bit too confusing. Instead, let's *bisect* the array: pose a problem that always has exactly one solution, no matter what.

Bisection is defined as follows: **given a boolean function f, find the smallest k ≥ 0 such that, for all i < k, f(a[i]) == false, and for all j >= k, f(a[j]) == true**. In other words, we are to find the smallest k such that, for all i within the array bounds, f(a[i]) == (i >= k). If the array is sorted in such a way that f(a[i]) is monotonic--first false, then true--this problem always has exactly one solution. When f(a[i]) is false for all array elements, the resulting k is one past the last index of the array, but this is still not a special case: it is exactly the answer as defined by the bold text above.

This is more powerful than binary "search", which reduces to "bisection" as follows. To find whether an element x exists in an array a of length N, we can do the following:
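Something like this sketch (names are mine; the `bisect` here is a naive linear stand-in for the logarithmic version developed below, just to make the reduction concrete):

```cpp
#include <cstddef>

// Naive linear bisect: smallest k with f(a[i]) == (i >= k).
// Stands in for the logarithmic version developed below.
template <class Pred>
size_t bisect(const int* a, size_t n, Pred f) {
    for (size_t k = 0; k < n; ++k)
        if (f(a[k])) return k;
    return n;
}

// Reduce "does x exist in sorted a?" to bisection: bisect with
// f(v) = (v >= x); if x is present, it sits exactly at position k.
bool contains(const int* a, size_t n, int x) {
    size_t k = bisect(a, n, [x](int v) { return v >= x; });
    return k < n && a[k] == x;
}
```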

Now that we've defined bisection, let's proceed to solving it.

### 2. **Consistency**: Use a Consistent Definition of Search Bounds

Let's consider a zero-indexed array of length `N`, so the indexes range from `0` to `N-1`. The answer to our bisection problem is between `0` and `N` inclusive. (Yes, `N` is outside the array bounds, but the answer to bisection can indeed lie outside the bounds--for instance, when every element of the array is smaller than the value we're searching for.)

We start by assigning `i` to `0` and `j` to `N`. No special cases allowed: we can do without them entirely here. At each step, we will check the value at some index `s`, where `i <= s < j`. Inspecting this value will allow us to shrink the range.

### 3. **Overflows**: Avoid Overflows for Integer Operations

We have literally one integer operation here: given the bounds, compute the pivot point, i.e. the point in the middle of the range to which we'll apply f in order to make a step. How hard can it be to compute **the middle of an interval** correctly?

Turns out, quite hard. This problem is discussed in more detail in the article I linked above. If we lived in a world of unbounded integers, it would simply be (i+j)/2, but we live in a world of integer overflows. If i and j are non-negative integers--which makes sense for array indexes in C++--the never-overflowing expression would be:
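A sketch of the standard overflow-safe form, which matches the properties described below (`size_t` indexes assumed):

```cpp
#include <cstddef>

// Overflow-safe middle of the range [i, j): the difference j - i
// cannot overflow when i <= j, unlike the naive (i + j) / 2.
size_t middle(size_t i, size_t j) {
    return i + (j - i) / 2;
}
```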

Another useful property of this expression (in C++, at least) is that it's never equal to j if i < j. Here's why it's important.

### 4. **Termination**: Ensure Termination

A basic way to make sure your program eventually terminates is to construct a "loop variant": a quantity that *strictly* decreases with each loop iteration and cannot decrease forever. For our bisection, the natural variant is the length of the part of the original array in which we are still looking for the answer, which is max(j-i, 0). It can't decrease forever because it would reach zero at some point, from which there's nowhere further to go. We only need to prove that this variant decreases by at least one on each iteration.

Fun fact: special programs can prove that a program terminates automatically, without human intervention! One of these programs is named... Terminator. You can read more about this in the very accessible article "Proving Program Termination" by Byron Cook et al. I first learned about it at a summer school where Mr. Cook himself gave a talk on the subject.

The first step is to make sure that the moment the variant reaches zero is the last iteration of the loop (otherwise the variant wouldn't decrease!), so we use `while (i < j)`.

Now assume we've determined that the pivot point s does not satisfy the target function f. The key difference from most binary search algorithms is that **we assign the new i to s+1 rather than s**. Given our definition of the problem, s is *never* the answer in this case, and we should keep the candidate range as small as possible. If f(a[s]) is true, then s could still be the answer, so in that case we assign j to s, not s-1. Here's our iteration step now:
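A reconstruction of that step, extracted into a helper for illustration (the predicate here takes the index, so the function can compute the pivot itself; names are mine):

```cpp
#include <cstddef>

// One iteration of the bisection loop: shrink the range [i, j)
// by inspecting the pivot s. f_at(s) stands for f(a[s]).
template <class PredAtIndex>
void step(size_t& i, size_t& j, PredAtIndex f_at) {
    size_t s = i + (j - i) / 2;  // pivot: i <= s < j
    if (f_at(s))
        j = s;       // a[s] may still be the answer: keep it in range
    else
        i = s + 1;   // a[s] is ruled out: skip past it entirely
}
```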

We mentioned in the previous section that i <= s < j, hence i < s + 1 and s < j, so each of the assignments decreases j-i by at least 1 regardless of f's value. This proves that the loop never runs forever.

### The complete program

So here's our bisection, which works for all sorted arrays (in the sense of f(a[i]) being first false, then true, as i increases).
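A reconstruction of the complete program, assembled from the pieces above (the predicate f is a template parameter; indexes are `size_t`):

```cpp
#include <cstddef>

// Smallest k in [0, n] such that f(a[i]) == (i >= k), assuming
// f over the array is monotonic: first false, then true.
template <class Pred>
size_t bisect(const int* a, size_t n, Pred f) {
    size_t i = 0, j = n;             // the answer lies in [i, j]
    while (i < j) {
        size_t s = i + (j - i) / 2;  // overflow-safe pivot, i <= s < j
        if (f(a[s]))
            j = s;                   // a[s] may still be the answer
        else
            i = s + 1;               // a[s] is ruled out
    }
    return i;                        // i == j: the answer
}
```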

The search itself--finding whether a value exists in a sorted array--would then be written as follows (I'm just repeating what I wrote above, no surprises here):
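In code (the `bisect` from above is repeated so the sketch is self-contained; the wrapper's name is my own):

```cpp
#include <cstddef>

// bisect from the previous section, repeated for completeness.
template <class Pred>
size_t bisect(const int* a, size_t n, Pred f) {
    size_t i = 0, j = n;
    while (i < j) {
        size_t s = i + (j - i) / 2;
        if (f(a[s])) j = s; else i = s + 1;
    }
    return i;
}

// Binary search via bisection: x, if present, sits exactly at k.
bool binary_search_sorted(const int* a, size_t n, int x) {
    size_t k = bisect(a, n, [x](int v) { return v >= x; });
    return k < n && a[k] == x;
}
```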

### Proof By Authority

And the GNU C++ STL implementation of binary_search follows the same guidelines I posted here. In my version of GCC, 4.3.4, binary search looks like this:
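From memory, the libstdc++ version is essentially the following--treat it as a paraphrase rather than a verbatim quote (concept-check macros stripped, the reserved `__`-prefixed identifiers renamed, and the function placed in its own namespace so it doesn't collide with `std::`):

```cpp
#include <algorithm>

// Paraphrase of libstdc++'s binary_search (concept checks omitted).
namespace gnu_like {
template <typename ForwardIterator, typename Tp>
bool binary_search(ForwardIterator first, ForwardIterator last,
                   const Tp& val) {
    // lower_bound is the bisection: the first position i
    // where *i < val no longer holds.
    ForwardIterator i = std::lower_bound(first, last, val);
    return i != last && !(val < *i);
}
}
```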

where `lower_bound` is the bisection function.

### Conclusion

Here's my version of bisection and binary search, the one I usually use when I need one. It is designed to be simple, and simple things are easier to get right. Just make sure your loop variant decreases, you don't overflow, and you get rid of as many special cases as possible.

# Comments imported from the old website

Binary search is a little more complicated than usually presented. An exercise that can help a lot in writing a correct version of binary search is given here: http://arxiv.org/abs/1402.4843 I highly recommend looking at it. Without it, it looks like a guessing game. If we exclude recursive solutions, there are no more than 4-5 correct implementations of the algorithm. All solutions, including the one you gave, can be derived from the table given in the exercises. (Actually, I am collecting implementations and showing how to do that.) Your example is optimized, so it is not a good starting point. j in your code keeps the index one past the last element of the selected subarray. You need to explain this explicitly. The GNU code is optimized for several reasons, one of them being speed-ups at the assembly level. What you are writing about is an advanced level; nobody can use it as a starting point.

Many thanks to Martin Ward who pointed to inaccuracies in termination proof description.