What is the difference between Big-O notation O(n) and Little-o notation o(n)?
5 Answers
f ∈ O(g) says, essentially:

For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= kg(x) holds for all x > a.

Note that O(g) is the set of all functions for which this condition holds.

f ∈ o(g) says, essentially:

For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < kg(x) holds for all x > a.

Again, note that o(g) is a set.

In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.

In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as k is not negative or zero.
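Written out with quantifiers (just restating the two definitions above), the only change between the notations is whether k is existentially or universally quantified; in the little-o case, the threshold a is allowed to depend on k:

```latex
f \in O(g) \iff \exists\, k > 0,\ \exists\, a,\ \forall\, x > a:\quad 0 \le f(x) \le k\,g(x)
f \in o(g) \iff \forall\, k > 0,\ \exists\, a,\ \forall\, x > a:\quad 0 \le f(x) < k\,g(x)
```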
These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).

One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.
More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why constants can be dropped when working with Big-O notation.

However, for f ∈ o(g) to be true, g must include a higher power of x in its formula, so the relative separation between f(x) and g(x) must actually get larger as x gets larger.
To use purely math examples (rather than referring to algorithms):

The following are true for Big-O, but would not be true if you used little-o:
- x² ∈ O(x²)
- x² ∈ O(x² + x)
- x² ∈ O(200 * x²)
The following are true for little-o:
- x² ∈ o(x³)
- x² ∈ o(x!)
- ln(x) ∈ o(x)
Note that if f ∈ o(g), this implies f ∈ O(g). For example, x² ∈ o(x³), so it is also true that x² ∈ O(x³) (again, think of O as <= and o as <).
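If it helps to see the gap numerically, here is a minimal Python sketch using the example pairs above (the x! pair is left out just to keep the numbers printable): for the little-o pairs, the ratio f(x)/g(x) keeps shrinking toward 0, while for the Big-O-only pair x² vs 200·x², it settles at a nonzero constant.

```python
import math

# (f, g, label): for little-o pairs the ratio f(x)/g(x) tends to 0;
# for the Big-O-only pair it stays at a nonzero constant (here 1/200).
pairs = [
    (lambda x: x**2,        lambda x: x**3,       "x^2 vs x^3    (little-o)"),
    (lambda x: math.log(x), lambda x: x,          "ln(x) vs x    (little-o)"),
    (lambda x: x**2,        lambda x: 200 * x**2, "x^2 vs 200x^2 (Big-O only)"),
]

for f, g, label in pairs:
    ratios = ", ".join(f"{f(x) / g(x):.6f}" for x in (10, 100, 1000, 10000))
    print(f"{label}: {ratios}")
```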
I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)
Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.
Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N/loglogloglogN). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(n)."
Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)
In general
Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:
f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).

g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axes, you couldn't tell the zoom level).

Finally, h(n) ∈ O(n) means that function h can be in either of these two categories. It can either look a lot like n, or it could be smaller and smaller than n when n increases. Basically, both f(n) and g(n) are also in O(n).
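To make the "zoom out" picture concrete, here's a small sketch; √n and 2n + 3 are just hypothetical stand-ins for a function in o(n) and a function in Θ(n):

```python
import math

def f(n):
    # Hypothetical stand-in for a function in o(n): sqrt(n)
    return math.sqrt(n)

def g(n):
    # Hypothetical stand-in for a function in Θ(n): 2n + 3
    return 2 * n + 3

# "Zooming out" = looking at ever larger n.
# f(n)/n shrinks toward 0, while g(n)/n levels off near the constant 2.
for n in (10, 10**3, 10**6, 10**9):
    print(f"n={n:>10}  f(n)/n={f(n)/n:.8f}  g(n)/n={g(n)/n:.8f}")
```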
I think this Venn diagram (from this course) could help:

[Venn diagram: o(n) and Θ(n) shown as disjoint regions inside O(n).]
In computer science
In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet (Θ), that means we found an asymptotically optimal algorithm to solve that particular problem.
For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). (That's the definition of Θ, and it more or less translates to "asymptotically equal".) This also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in (strictly) less than n steps".
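In symbols, with Ω being the matching lower-bound notation (same shape as the O definition, with the inequality flipped):

```latex
f \in \Omega(g) \iff \exists\, k > 0,\ \exists\, a,\ \forall\, x > a:\quad f(x) \ge k\,g(x)
f \in \Theta(g) \iff f \in O(g)\ \text{and}\ f \in \Omega(g)
```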
Usually the o is used within lower bound proofs to show a contradiction. For example:
Suppose algorithm A can find the min value in an array of size n in o(n) steps. Since A ∈ o(n), it can't see all items from the input. In other words, there is at least one item x which A never saw. Algorithm A can't tell the difference between two similar input instances where only x's value changes. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of these instances. In other words, finding the minimum in an array is in Ω(n) (no algorithm in o(n) can solve the problem).
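Here is a toy Python illustration of that adversary argument. find_min_skipping_last is a hypothetical stand-in for algorithm A: it inspects only n-1 items, and the two handcrafted inputs, which differ only in the unseen position, force it to be wrong on at least one of them.

```python
def find_min_skipping_last(arr):
    """Hypothetical 'algorithm A': never looks at the last item,
    standing in for any algorithm that reads fewer than n items."""
    return min(arr[:-1])  # arr[-1] plays the role of the unseen item x

# Two instances that differ only in the unseen position x (the last item).
instance_1 = [5, 3, 7, 9]  # x = 9 is NOT the minimum; A happens to be right
instance_2 = [5, 3, 7, 1]  # x = 1 IS the minimum; A must get this one wrong

for arr in (instance_1, instance_2):
    claimed, actual = find_min_skipping_last(arr), min(arr)
    verdict = "ok" if claimed == actual else "WRONG"
    print(f"{arr}  claimed min: {claimed}  actual min: {actual}  -> {verdict}")
```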
Details about lower/upper bound meanings
An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) is a statement about the problem itself: it says that we built some example(s) where the given problem couldn't be solved by any algorithm in less than n steps (ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.
The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal) and < (less than).