Constant-Time

Let us start our timing experiments with something simple: how long does it take for a computer to add 1 to (that is, increment) a number?

  1. Replace "Hello!" in the timing script with an addition operator from the "Operators" palette, and add 1 to 100,000.
  2. Run the script a few times (around four) to get an idea of the approximate average time (in seconds) it takes for the computer to increment 100,000.
  3. Run the script repeatedly. (Do not place the script inside a repeat or a repeat until block, since we are interested in knowing the approximate time the script takes to run once.)
  4. Talk with Your Partner What's your gut feeling for how much longer the computer would take if we doubled 100,000?
  5. Test your intuition: Create a table of the approximate average times for how long it takes the computer to increment (add 1 to) numbers that you progressively double (a rough Python analogue appears after this list):
    • 100,000
    • 200,000
    • 400,000
    • 800,000
    • 1,600,000
    Remember that for each number, you need to run the timing script multiple times (around four) to get an idea of the approximate average time (you don't have to be precise).
  6. Talk with Your Partner What do you observe?
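
If you are curious what this experiment might look like in a text-based language, here is a minimal Python sketch (an analogue added for illustration, not part of the Snap! activity; the helper name approx_increment_time is made up). A single addition in Python is far too quick to time on its own, so the sketch averages over many repetitions, even though the Snap! script is timed one run at a time; the pattern to look for is the same.

    import timeit

    # Average the time for one "n + 1" by running it many times.
    # (Illustrative analogue of the Snap! timing script, not part of the lab.)
    def approx_increment_time(n, loops=1_000_000):
        # timeit evaluates the lambda `loops` times and returns the total
        # time in seconds; dividing gives a per-increment estimate.
        return timeit.timeit(lambda: n + 1, number=loops) / loops

    for n in [100_000, 200_000, 400_000, 800_000, 1_600_000]:
        print(f"{n:>9,}: {approx_increment_time(n):.2e} seconds")
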
Tough Stuff
  1. Take another huge leap and find out approximately how long it takes for the computer to increment numbers that you progressively scale by 10 (a Python sketch follows this list):
    • 160,000,000
    • 1,600,000,000
    • 16,000,000,000
    What do you observe? How is this similar to or different from scaling by 2?
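
The same kind of sketch covers the scale-by-10 experiment. Python integers have no fixed size limit, so even 16,000,000,000 is incremented like any other number; again, this is an optional analogue, not part of the Snap! activity.

    import timeit

    # Time one "n + 1" for numbers scaled by 10 (illustrative analogue only).
    for n in [160_000_000, 1_600_000_000, 16_000_000_000]:
        per_call = timeit.timeit(lambda: n + 1, number=1_000_000) / 1_000_000
        print(f"{n:>14,}: {per_call:.2e} seconds")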

In the timing experiments above, you may have noticed that the computer takes approximately the same amount of time to increment a number even as the number gets progressively larger. This is why computer scientists call incrementing a number a constant-time operation. It turns out that any basic arithmetic operation (addition, subtraction, multiplication, division, and exponentiation) is considered a constant-time operation.

Notice that we call these operations constant-time, but we don't say how much time they take; different computers take different amounts of time to perform the same operation. Instead, we focus on how an algorithm's running time scales as its inputs grow, because that is a property of the algorithm itself, independent of the computer it runs on.

Why does the computer take about the same time even as the number grows? Think about adding numbers the way you learned in elementary school: you work digit by digit, so the time to add two numbers depends on how many digits they have, and a computer's arithmetic works similarly (ignoring technical details). Doubling a number only occasionally adds a digit, and scaling it by ten adds just one, so the extra work is tiny and the total time stays approximately constant.
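
To make the digit-by-digit picture concrete, here is a small Python sketch of the grade-school method (a simplification: real computers work on binary digits in hardware, and Python does not actually add this way). The loop runs once per digit, so a number with one extra digit costs only one extra step.

    # Grade-school addition, one digit at a time, least-significant first.
    def grade_school_add(a, b):
        digits_a = [int(d) for d in str(a)][::-1]
        digits_b = [int(d) for d in str(b)][::-1]
        result, carry = [], 0
        for i in range(max(len(digits_a), len(digits_b))):
            da = digits_a[i] if i < len(digits_a) else 0
            db = digits_b[i] if i < len(digits_b) else 0
            carry, digit = divmod(da + db + carry, 10)
            result.append(digit)
        if carry:
            result.append(carry)
        return int("".join(str(d) for d in reversed(result)))

    print(grade_school_add(1_600_000, 1))  # 1600001, computed in 7 digit steps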

Constant-time operations are the Holy Grail of computer science algorithms, but, unfortunately, most algorithms are not constant-time...