In mathematics, computer science, and related fields, Big-O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. Big-O notation characterizes functions according to their growth rate: different functions with the same growth rate may be represented using the same O notation.
Although developed as a part of pure mathematics, this notation is now frequently also used in the analysis of algorithms to describe an algorithm's usage of computational resources: the worst-case or average-case running time or memory usage of an algorithm is often expressed as a function of the length of its input using Big-O notation.
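As a small illustration of the computer-science usage (the function below is a generic sketch, not taken from any cited source), linear search examines every element of its input in the worst case, so its worst-case running time is O(n) for an input of length n:

def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:  # one comparison per element examined
            return i
    # Worst case (target absent or in the last position): n comparisons,
    # so the running time is O(n) in the input length n.
    return -1

Under the usual assumption that each element is equally likely to be the target, the average case inspects about n/2 elements, which is still O(n) once the constant factor 1/2 is dropped.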
Big-O notation has two main areas of application. In mathematics, it is commonly used to describe how closely a finite series approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion. In computer science, it is useful in the analysis of algorithms. In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower-order terms.
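For example, the exponential function near zero can be written as a truncated Taylor series with a Big-O remainder (a standard identity, included here only as an illustration of the mathematical usage):

e^x = 1 + x + x^2/2! + x^3/3! + O(x^4), as x → 0.

Here O(x^4) stands for the omitted higher-order terms, whose absolute value is bounded by a constant multiple of |x|^4 for x sufficiently close to zero.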
Big-O Notation and Analysis of Algorithms
In our study of algorithms, nearly every function whose order we are interested in finding is a function that defines the quantity of some resource consumed by a particular algorithm in relation to the parameters of that algorithm. This function is usually not the same as the algorithm itself, but is a property of the algorithm. Although Big-O notation is, strictly speaking, a way of describing the order of a function, it is also commonly used as shorthand for the time complexity of an algorithm, as the sketch below illustrates.
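To make that distinction concrete, here is a minimal sketch (the function name is illustrative) in which the resource function is the number of comparisons performed, counted explicitly. The exact count is n(n-1)/2 = 0.5n^2 - 0.5n; dropping the constant factor and the lower-order term gives O(n^2):

def count_duplicate_comparisons(items):
    """Check every unordered pair for equality and count the comparisons."""
    n = len(items)
    comparisons = 0
    has_duplicate = False
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1  # the resource we are measuring
            if items[i] == items[j]:
                has_duplicate = True
    return has_duplicate, comparisons

For n = 100 the function reports 4950 comparisons, matching n(n-1)/2. It is this comparison count, not the code itself, whose order the notation O(n^2) describes.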
References:
http://en.wikipedia.org/wiki/Big_O_notation
http://www.eecs.harvard.edu/~ellard/Q-97/HTML/root/nodes8.html