Friday, February 4, 2011

...Algorithm...

….Empirical Analysis….

            When we say “empirical”, it basically means a result obtained from observation or experiment rather than from theory. We can say that something underwent empirical analysis when the result was obtained by actually performing an experiment, not from mere speculation or hearsay.
As I explored internet sites, I learned that Empirical Analysis follows steps such as the following:
1. Decide on the essential operation to measure.
2. Think about and choose your input sample (range, size, ...).
3. Convert the algorithm to a program.
4. Create a sample of inputs.
5. Execute the program; observe and record the data.
6. Evaluate the data.



Analysis of algorithms is the process of determining the quantity of resources, such as time and storage, necessary to execute an algorithm. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
Analysis of algorithms is an important part of the broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem. These estimates provide insight into reasonable directions in the search for efficient algorithms.
(Wikipedia.com)


Generally, analyzing an algorithm means studying the structure of the algorithm and drawing conclusions about how the execution of that algorithm, which is the program, will behave.
            According to Bruno R. Preiss, by doing analysis of algorithms we can:
  • determine the running time of a program as a function of its inputs;
  • determine the total or maximum memory space needed for program data;
  • determine the total size of the program code;
  • determine whether the program correctly computes the desired result;
  • determine the complexity of the program--e.g., how easy is it to read, understand, and modify;
  • determine the robustness of the program--e.g., how well does it deal with unexpected or erroneous inputs?
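As a small illustration of the first point (running time as a function of the inputs), an algorithm can be instrumented to count its basic operations instead of measuring wall-clock time. The linear-search example below is my own sketch, not from Preiss:

```python
def linear_search(items, target):
    """Return (index, comparisons); comparisons counts the basic operation."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Worst case: the target is absent, so the count equals the input length n.
for n in (10, 100, 1000):
    _, steps = linear_search(list(range(n)), -1)
    print(n, steps)  # steps == n: the running time grows linearly with n
```

Counting operations this way gives the running time as a function of the input, independent of any particular machine.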

….Big – Oh Notation….

Big-Oh (the "O" means "order of") notation is primarily concerned with what happens for very large values of N; therefore only the largest term in a polynomial is needed. All smaller terms are dropped.
According to Swartz, Big-Oh notation is not always useful, for the following reasons:
  • Too hard to analyze. Many algorithms are simply too hard to analyze mathematically.
  • Average case unknown. There may not be sufficient information to know what the most important "average" case really is, therefore analysis is impossible.
  • Unknown constant. Both walking and traveling at the speed of light have a time-as-function-of-distance big-oh complexity of O(N). Although they have the same big-oh characteristics, one is rather faster than the other. Big-oh analysis only tells you how it grows with the size of the problem, not how efficient it is.
  • Small data sets. If there are no large amounts of data, algorithm efficiency may not be important.
(Fred Swartz)

This notation is useful for estimating the CPU time or memory resources an algorithm will require.
Big-Oh notation describes the limiting behavior of a function when the argument tends toward a particular value or infinity, usually in terms of simpler functions. Big-O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
Although it was developed as a part of pure mathematics, this notation is now frequently also used in the analysis of algorithms to describe an algorithm's usage of computational resources: the worst-case or average-case running time or memory usage of an algorithm is often expressed as a function of the length of its input using big O notation. This allows algorithm designers to predict the behavior of their algorithms and to decide which of multiple algorithms to use, in a way that is independent of computer architecture or clock rate. Because big O notation discards multiplicative constants on the running time, and disregards efficiency for low input sizes, it does not always reveal the fastest algorithm in practice or for practically-sized data sets, but the approach is still very effective for comparing the scalability of various algorithms as input sizes become large.
(Wikipedia.com)
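To make the architecture-independent comparison concrete, here is a sketch of my own (not from the source): two ways of checking for duplicates in a list, with their basic operations counted so the O(n²) and O(n) growth rates can be seen directly.

```python
def has_duplicate_quadratic(items):
    """Nested-loop check: about n*(n-1)/2 comparisons in the worst case, O(n^2)."""
    ops = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            ops += 1
            if items[i] == items[j]:
                return True, ops
    return False, ops

def has_duplicate_linear(items):
    """Set-based check: about n lookups in the worst case, O(n)."""
    ops = 0
    seen = set()
    for x in items:
        ops += 1
        if x in seen:
            return True, ops
        seen.add(x)
    return False, ops

n = 1000
data = list(range(n))  # no duplicates: the worst case for both versions
print(has_duplicate_quadratic(data)[1])  # 499500, roughly n^2 / 2
print(has_duplicate_linear(data)[1])     # 1000, exactly n
```

The counts do not depend on clock rate or hardware, which is exactly why big O is useful for comparing scalability.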


Formula:
f(n) = O(g(n))
if f(n) grows at the same rate as or slower than g(n);
f(n) = Θ(g(n)) if they grow at the same rate, or f(n) = o(g(n)) if f(n) grows strictly slower.
Example:
n + 5  = Θ(n)  = O(n) = O(n²)
         = O(n³) = O(n⁵)
the closest estimation: n + 5 = Θ(n)
the general practice is to use the Big-Oh notation:
            n + 5 = O(n)
*      Rules to manipulate Big-Oh expressions
Rule 1:
a. If T1(N) = O(f(N)) and T2(N) = O(g(N))
then   T1(N) + T2(N) = max( O( f (N) ), O( g(N) ) ) 

b. If T1(N) = O( f(N) ) and T2(N) = O( g(N) )
then T1(N) * T2(N) = O( f(N)* g(N) )
Rule 2:
If T(N) is a polynomial of degree k,
then T(N) = Θ(Nᵏ)
Rule 3:
logᵏ N = O(N) for any constant k.
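Rule 3 can be checked numerically: even a high power of the logarithm is eventually dominated by N. The power k = 3 and the sample sizes below are my own illustrative choices:

```python
import math

# Rule 3: log^k N = O(N) for any constant k.
# The ratio (log N)^k / N shrinks toward 0 as N grows, so it is
# certainly bounded by a constant -- which is the definition of O(N).
k = 3
for N in (10**3, 10**6, 10**9):
    ratio = math.log(N) ** k / N
    print(N, ratio)  # the ratio keeps getting smaller
```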

Examples:
n² + n = O(n²)
we disregard any lower-order term
n log(n) = O(n log(n))
n² + n log(n) = O(n²)
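The last example can also be verified empirically, in the spirit of the Empirical Analysis section above. This sketch of mine checks that f(n) = n² + n log(n) stays within a constant multiple of n², which is what f(n) = O(n²) means:

```python
import math

def f(n):
    """f(n) = n^2 + n log(n): the n log(n) term is lower order."""
    return n * n + n * math.log(n)

# The ratio f(n) / n^2 stays bounded (it tends to 1 as n grows),
# confirming that the lower-order term n log(n) becomes negligible.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / (n * n))
```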
