Is `log(n)` base 10?

Still getting a grip on logarithms being the opposite of exponentials. (Would it also be correct to describe them as the inverse of exponentials?)

There are lots of great SO entries already on Big-O notation, including O(log n) and QuickSort's O(n log n) specifically. I found some useful graphs as well.

In looking at Divide and Conquer algorithms, I'm coming across n log n, which I think is n multiplied by the value of log n. I often try concrete examples like 100 log 100 to help visualize what's going on in an abstract equation.

Just read that log n assumes base 10. Does n log n translate into:

"the number n multiplied by the amount 10 needs to be raised to the power of in order to equal the number n"?

So 100 log 100 equals 200 because 10 needs to be raised to the power of two to equal 100?
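
Here's the kind of concrete check I mean (a minimal Python sketch, assuming log means base 10):

```python
import math

n = 100
# log base 10 of 100 is 2, because 10**2 == 100
print(math.log10(n))      # 2.0
# so 100 log 100 (taking base 10) is 100 * 2 = 200
print(n * math.log10(n))  # 200.0
```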

Does the base change as an algorithm iterates through a set? Does the base even matter if we're talking in abstractions anyway?

Jon Skeet

You don't need to worry about the base - if you're dealing with algorithmic complexity, it doesn't matter which base you're in, because the difference is just a constant factor.
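
To see why the base is only a constant factor, here's a minimal Python sketch of the change-of-base rule, log_b(n) = log_k(n) / log_k(b):

```python
import math

# Change of base: log_b(n) = log_k(n) / log_k(b), so logs in any two
# bases differ only by a constant multiplier, whatever n is.
for n in (10, 100, 1_000, 1_000_000):
    print(n, math.log2(n) / math.log10(n))  # always ~3.3219
```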

Fundamentally, you just need to know that log n means that as n increases exponentially, the running time (or space used) increases linearly. For example, if n=10 takes 1 minute, then n=100 would take 2 minutes, and n=1000 would take 3 minutes - roughly. (It's usually in terms of upper bounds, with smaller factors ignored... but that's the general gist of it.)
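
As a rough illustration of that "exponential in, linear out" behaviour (just a sketch, using base 10):

```python
import math

# Each time n grows by a factor of 10, log10(n) only increases by 1.
for n in (10, 100, 1_000, 10_000):
    print(n, math.log10(n))  # -> 1, 2, 3, 4 (give or take float rounding)
```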

n log n is just that: log n multiplied by n - so the time or space taken increases "a bit faster than linearly", basically.
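
A minimal sketch of that "a bit faster than linearly" point: when n doubles, n log n slightly more than doubles.

```python
import math

# When n doubles, n * log2(n) slightly more than doubles, because
# log2(n) itself keeps (slowly) growing.
prev = None
for n in (1_000, 2_000, 4_000, 8_000):
    value = n * math.log2(n)
    if prev is not None:
        print(n, round(value / prev, 3))  # a little over 2.0 each time
    prev = value
```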


See more on this question at Stackoverflow