Analyzing Algorithms and Problems

Algorithm
     An algorithm is any well-defined computational procedure that takes some
values as input and produces some values as output. An algorithm provides a step-by-step method for solving a computational problem. Unlike programs, algorithms are not dependent on a particular programming language, machine, system, or compiler. They are mathematical entities. Algorithm design is all about the mathematical theory behind the design of good programs.
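For concreteness, here is a minimal sketch of such a procedure in Python (the function name find_max and the sample input are illustrative, not taken from the text): it spells out, step by step, how an input list is turned into an output value.

    def find_max(values):
        # Return the largest element of a non-empty list.
        # Input: a non-empty list of comparable values.
        # Output: the maximum value in the list.
        best = values[0]            # best candidate seen so far
        for v in values[1:]:        # examine every remaining element
            if v > best:            # a larger element replaces the candidate
                best = v
        return best

    print(find_max([3, 7, 2, 9, 4]))   # prints 9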
Criteria for an algorithm
  • Correctness (producing the correct output)
  • Amount of work done (running time); see the sketch after this list
  • Amount of space used
  • Simplicity and clarity
  • Optimality (choosing the best method for solving the problem)
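To make "amount of work done" concrete, the sketch below (illustrative code; the input is assumed to be sorted, and the function names are hypothetical) counts the key comparisons performed by linear search and by binary search on the same worst-case query.

    def linear_search(a, key):
        # Scan left to right; the worst case examines all n elements.
        comparisons = 0
        for x in a:
            comparisons += 1
            if x == key:
                break
        return comparisons

    def binary_search(a, key):
        # Halve the sorted range each step; the worst case makes about log2(n) probes.
        comparisons = 0
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1            # count each comparison against the key
            if a[mid] == key:
                break
            elif a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons

    a = list(range(1024))            # sorted input of size n = 1024
    print(linear_search(a, -1))      # 1024 comparisons in the worst case
    print(binary_search(a, -1))      # 10 comparisons, about log2(1024)

Both functions return a count rather than a position, purely to show how work can be measured; preferring binary search over linear search on sorted data is exactly the kind of decision the optimality criterion refers to.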

Asymptotic notation
    The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T(n), which is usually defined only on integer input sizes. It is sometimes convenient, however, to abuse asymptotic notation in a variety of ways.
O-notation
Θ-notation (defined below) asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.
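A quick numeric check (a sketch; the function f(n) = 2n² + 3n and the witnesses c = 3, n0 = 3 are illustrative choices) shows how the definition is used: to place f(n) in O(n²) it suffices to exhibit constants that make the inequality hold.

    # f(n) = 2n^2 + 3n is O(n^2) with witnesses c = 3 and n0 = 3,
    # because 0 <= 2n^2 + 3n <= 3n^2 whenever n >= 3.
    def f(n): return 2 * n * n + 3 * n
    def g(n): return n * n
    c, n0 = 3, 3
    assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
    print("0 <= 2n^2 + 3n <= 3n^2 holds for all tested n >= 3")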

Ω-notation
Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}.
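The same style of check works for a lower bound (again a sketch; f(n) = n² - 3n and the witnesses c = 1/2, n0 = 6 are illustrative choices).

    # f(n) = n^2 - 3n is Omega(n^2) with witnesses c = 1/2 and n0 = 6,
    # since 0 <= (1/2)n^2 <= n^2 - 3n whenever n >= 6.
    def f(n): return n * n - 3 * n
    def g(n): return n * n
    c, n0 = 0.5, 6
    assert all(0 <= c * g(n) <= f(n) for n in range(n0, 10_000))
    print("0 <= (1/2)n^2 <= n^2 - 3n holds for all tested n >= 6")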

Θ-notation
   Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation f(n) = Θ(g(n)) means f(n) is within constant multiples of g(n), both above and below. The equation is read, "f of n is theta g of n".
Formal Definition: f(n) = Θ(g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k. The values of c1, c2, and k must be fixed for the function f and must not depend on n.
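To see the two-sided bound in action, here is an illustrative sketch (the function f(n) = n²/2 - 3n and the witnesses c1 = 1/14, c2 = 1/2, k = 7 are example choices, not from the text); exact fractions are used so the boundary case n = 7 is checked without rounding error.

    from fractions import Fraction
    # f(n) = n^2/2 - 3n is Theta(n^2) with c1 = 1/14, c2 = 1/2, and k = 7,
    # since (1/14)n^2 <= n^2/2 - 3n <= (1/2)n^2 for all n >= 7.
    def f(n): return Fraction(n * n, 2) - 3 * n
    def g(n): return n * n
    c1, c2, k = Fraction(1, 14), Fraction(1, 2), 7
    assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(k, 10_000))
    print("(1/14)n^2 <= n^2/2 - 3n <= (1/2)n^2 holds for all tested n >= 7")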

o-notation (little oh)
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set
o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0}.
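The distinction can be checked numerically (a small sketch reusing the two bounds above): for 2n = o(n²) the ratio f(n)/g(n) shrinks below any positive c, while for 2n² the ratio never drops below 2.

    # 2n = o(n^2): the ratio 2n / n^2 = 2/n tends to 0 as n grows,
    # so it eventually falls below ANY positive constant c.
    # 2n^2 is NOT o(n^2): the ratio 2n^2 / n^2 stays at 2 forever.
    for n in (10, 100, 1000, 10000):
        print(n, (2 * n) / (n * n), (2 * n * n) / (n * n))
    # ratios: 0.2, 0.02, 0.002, 0.0002   versus   2.0 every time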
ω-notation
By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)). Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
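An analogous numeric sketch for the strict lower bound (the example functions n²/2 and n are illustrative): the ratio f(n)/g(n) must eventually exceed every positive constant.

    # n^2/2 = omega(n): the ratio (n^2/2) / n = n/2 grows without bound,
    # so it eventually exceeds ANY positive constant c.
    # n^2/2 is NOT omega(n^2): that ratio stays at 0.5 forever.
    for n in (10, 100, 1000, 10000):
        print(n, (n * n / 2) / n, (n * n / 2) / (n * n))
    # ratios: 5.0, 50.0, 500.0, 5000.0   versus   0.5 every time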

Properties of asymptotic notation
Transitivity:
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)),
f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)),
f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)),
f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)),
f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).
Reflexivity:
f(n) = Θ(f(n)),
f(n) = O(f(n)),
f(n) = Ω(f(n))
Symmetry:
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)). 
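For example, n = O(n²) and n² = O(n³) give n = O(n³) by transitivity, and by symmetry 3n² + 2n = Θ(n²) holds exactly when n² = Θ(3n² + 2n).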
