Big O Cheatsheet

This is an extraction of the essential points from this excellent blog series on understanding Big O notation:

  • A way of analyzing the efficiency of algorithms
  • Represents the rate at which the time needed increases as the amount of input increases
  • "Amount of input" here refers to the literal size of the data input to the function via its arguments
  • O(1) "constant time" means there is no increase in time elapsed as input grows
    • Example: function (a) { return !a } // a negation function: an accurate, if shitty, example (another sketch just below)
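    One more sketch of my own (not from the article) to drive the O(1) point home: grabbing an element by index costs the same no matter how big the array is.

      // Hypothetical example: index lookup touches exactly one element,
      // so it costs the same whether items holds 10 entries or 10 million
      function first (items) {
        return items[0]
      }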
  • O(n) "linear time" each increase of one item equals an increase of one constant time operation
    • Example: a while loop where the condition variable is incremented each loop
    • O(2n) (see the caveat point just below)
      • Example: a while loop where the condition variable is incremented twice each loop (two constant time operations per loop)
      • O(2n), or O(anyInteger * n), is not used because we only care about "orders of magnitude", and this thing is still in the "linear" order of magnitude: just a steeper linear function (see this graph for visceral proof, comparing x to 2x). Both loop flavors are sketched just below.
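    To make the O(n) vs O(2n) point concrete, here's a quick sketch of my own (not from the article) of both loop flavors:

      // O(n): one constant-time operation per loop iteration
      function countOps (n) {
        let i = 0
        while (i < n) {
          i++ // one operation per item
        }
        return i
      }

      // "O(2n)": two constant-time operations per iteration; still linear,
      // just a steeper line, which is why we collapse it to O(n)
      function countOpsTwice (n) {
        let i = 0
        let ops = 0
        while (i < n) {
          ops++ // first constant-time operation
          i++   // second constant-time operation
        }
        return ops + i // 2n operations happened in total
      }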
  • O(n²) "quadratic time" means that time taken increases by n * n (different way of saying n²) as input increases linearly.
    • Made up stupid example: 4 items takes 16 milliseconds, 5 items takes 25 milliseconds, 6 takes 36 (see the nested-loop sketch just below)
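    A minimal sketch of quadratic time (my own made-up example): a naive duplicate check, where every item gets compared against every item.

      // O(n²): for each of the n items we scan all n items again, so the
      // work is n * n; double the input and the time roughly quadruples
      function hasDuplicate (items) {
        for (let i = 0; i < items.length; i++) {
          for (let j = 0; j < items.length; j++) {
            if (i !== j && items[i] === items[j]) return true
          }
        }
        return false
      }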
  • If the lines plotted by two different functions (O(45n) and O(46n), for example) never cross from 1 onward, they are equivalent orders
    • Another way of saying this: the lines must "bound" each other, meaning each line is a "boundary" that the other never crosses (a quick numeric check follows below)
    • Excerpt from the article: "You might notice that this definition means that the O(45n) function is also O(n³) because it will also never cross. This is true, but not particularly useful."
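    A quick numeric sanity check of the bounding idea (my own sketch, not from the article):

      // Tabulate 45n, 46n, and n³: 46n never dips below 45n from 1 onward
      // (same order), while n³ climbs above 45n around n = 7 and stays
      // there forever after, which is why 45n is technically O(n³) too
      for (const n of [1, 5, 7, 10, 100]) {
        console.log(n, 45 * n, 46 * n, n ** 3)
      }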
  • For extra credit nerding out, check out this table of time complexities with some bitchin' explanation and examples