Design and Analysis of Algorithms




MC9223/Design and Analysis of Algorithms/Unit I

To illustrate the notion of an algorithm, we consider three methods for solving the same problem: computing the greatest common divisor.

The greatest common divisor of two nonnegative, not-both-zero integers m and n, denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly.

First try -- Euclid's algorithm. Euclid's algorithm is a famous method based on repeated application of the equality

gcd(m, n) = gcd(n, m mod n)

until the second number reaches 0. For example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.

Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

Pseudocode of Euclid’s Algorithm:

Algorithm Euclid(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

Algorithms

An algorithm is a sequence of unambiguous instructions for solving a computational problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

[Figure: the notion of an algorithm — a problem and its input are fed to an algorithm, which is executed by a "computer" to produce the output.]
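The pseudocode above translates directly into a few lines of code. The following Python sketch (the function name is ours, not from the text) mirrors it step for step:

```python
def gcd_euclid(m, n):
    """Compute gcd(m, n) by Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)."""
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m
```

For example, gcd_euclid(60, 24) returns 12, matching the hand computation above.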

Second Try for gcd(m, n)

Consecutive Integer Algorithm

Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder of this division is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder of this division is 0, return the value of t as the answer and stop; otherwise, proceed to Step 4.
Step 4: Decrease the value of t by 1. Go to Step 2.
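The consecutive integer checking algorithm can be sketched in Python as follows (an illustrative rendering; note that, unlike Euclid's algorithm, it does not work if m or n is 0, since t would start at 0):

```python
def gcd_consecutive(m, n):
    """Consecutive integer checking: try t = min(m, n), t - 1, ... until
    t divides both m and n evenly. Assumes m and n are positive."""
    t = min(m, n)
    while m % t != 0 or n % t != 0:   # Steps 2-3: check both remainders
        t -= 1                        # Step 4: decrease t and retry
    return t
```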

Third try for gcd(m, n)

Middle-school procedure:
Step 1: Find the prime factors of m.
Step 2: Find the prime factors of n.
Step 3: Identify all the common factors in the two prime expansions found in Step 1 and Step 2. (If p is a common factor occurring pm and pn times in m and n, respectively, it should be repeated min{pm, pn} times.)
Step 4: Compute the product of all the common factors and return it as the gcd of the numbers given.

Thus, for the numbers 60 and 24 we get:

60 = 2 · 2 · 3 · 5
24 = 2 · 2 · 2 · 3
gcd(60, 24) = 2 · 2 · 3 = 12
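The middle-school procedure can also be sketched in Python (illustrative; the helper names are ours). Trial division stands in for Steps 1 and 2, and a counter intersection implements the min{pm, pn} rule of Step 3:

```python
from collections import Counter

def prime_factors(x):
    """Prime factors of x, with multiplicity, found by trial division."""
    factors, d = [], 2
    while d * d <= x:
        while x % d == 0:
            factors.append(d)
            x //= d
        d += 1
    if x > 1:
        factors.append(x)
    return factors

def gcd_middle_school(m, n):
    """Multiply each common prime factor p, repeated min(pm, pn) times."""
    common = Counter(prime_factors(m)) & Counter(prime_factors(n))  # min of counts
    result = 1
    for p, count in common.items():
        result *= p ** count
    return result
```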

What can we learn from the previous three examples?

Each step of an algorithm must be unambiguous.
The same algorithm can be represented in several different ways (different pseudocodes).
There may exist more than one algorithm for a given problem.
Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.

Fundamentals of Algorithmic Problem Solving

Understanding the problem: read the problem's description carefully, ask questions, work a few examples by hand, think about special cases, etc.


Deciding on exact vs. approximate problem solving
Deciding on appropriate data structures
Designing an algorithm
Proving correctness
Analyzing the algorithm:
    Time efficiency: how fast the algorithm runs
    Space efficiency: how much extra memory the algorithm needs
    Simplicity and generality
Coding the algorithm

Understanding the problem: An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the range of instances the algorithm needs to handle. If you fail to do this, the algorithm may work correctly for a majority of inputs but crash on some "boundary" value. A correct algorithm is not one that works most of the time but one that works correctly for all legitimate inputs.

Ascertaining the Capabilities of a Computational Device: Once you have understood a problem, you must ascertain the capabilities of the computational device the algorithm is intended for. The central assumption of the random-access machine (RAM) model is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms. Some newer computers can execute operations concurrently, i.e., in parallel; algorithms that take advantage of this capability are called parallel algorithms.

Choosing between Exact and Approximate Problem Solving: The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case the algorithm is called an exact algorithm; in the latter case, an approximation algorithm.

Deciding on Appropriate Data Structures: Even in the world of object-oriented programming, data structures remain crucially important for both the design and the analysis of algorithms. Algorithms + Data Structures = Programs.

Algorithm Design Techniques: What is an algorithm design technique? An algorithm design technique is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Methods of Specifying an Algorithm:


Once you have designed an algorithm, you need to specify it in some fashion. Using natural language alone is obviously difficult. A pseudocode is a mixture of a natural language and programming-language-like constructs, and it is usually more precise than a natural language. Computer scientists have never agreed on a single form of pseudocode.

Proving an Algorithm’s Correctness:

Once an algorithm has been specified, you have to prove its correctness: that is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. A common technique for proving correctness is mathematical induction, because an algorithm's iterations provide a natural sequence of steps needed for such proofs. The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms: the error of an approximation algorithm should not exceed a predefined limit.

Analyzing an Algorithm:

After correctness, the most important property is efficiency. There are two kinds of algorithm efficiency:

Time efficiency: indicates how fast the algorithm runs.
Space efficiency: indicates how much extra memory the algorithm needs.

Another characteristic of an algorithm is simplicity. Simplicity is important because simpler algorithms are easier to understand and easier to program, and the resulting programs usually contain fewer bugs. Simpler algorithms are also sometimes more efficient than more complicated alternatives.

Another characteristic of an algorithm is generality. There are two issues here:

Generality of the problem the algorithm solves: it is often easier to design an algorithm for a more general problem, but there are situations where designing a more general algorithm is unnecessary, difficult, or even impossible.

The range of inputs it accepts: your main concern should be designing an algorithm that can handle a range of inputs that is natural for the problem at hand.

Coding an Algorithm: Most algorithms are ultimately implemented as computer programs. The transition from an algorithm to a program must be done carefully. The validity of the computer program is established by testing. Testing of computer programs is an art rather than a science.

Important Problem Types

Sorting
Searching
String processing
Graph problems
Combinatorial problems
Geometric problems
Numerical problems

Sorting:

Rearrange the items of a given list in ascending order.

Input: A sequence of n numbers ⟨a1, a2, …, an⟩
Output: A reordering ⟨a′1, a′2, …, a′n⟩ of the input sequence such that a′1 ≤ a′2 ≤ … ≤ a′n.

Sorting can be applied to lists of numbers, characters from an alphabet, character strings, and records such as those maintained by schools about students, by libraries about books, and by companies about their employees.

Why sorting? Sorting helps searching, and algorithms often use sorting as a key subroutine.

Sorting key: a specially chosen piece of information used to guide sorting, e.g., sorting student records by name.

Examples of sorting algorithms:
Selection sort
Bubble sort
Insertion sort
Merge sort
Heap sort

Sorting algorithm complexity is evaluated by the number of key comparisons. Two properties are of interest:

Stability: a sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input.

In place: a sorting algorithm is in place if it does not require extra memory, except possibly for a few memory units.

Searching:


The searching problem deals with finding a given value, called a search key, in a given set.

Examples of searching algorithms:
Sequential search
Binary search

String Processing: A string is a sequence of characters from an alphabet. Text strings comprise letters, numbers, and special characters; bit strings comprise zeros and ones. String matching is the problem of searching for a given word or pattern in a text.

Graph Problems: A graph is a collection of points called vertices, some of which are connected by line segments called edges. Graphs can be used for modeling real-life problems such as the WWW, transportation and communication networks, and project scheduling.

Examples of widely used graph algorithms:
Graph traversal algorithms
Shortest-path algorithms
Topological sorting

Two widely known graph problems are:

The traveling salesman problem: find the shortest tour through n cities that visits every city exactly once.
The graph coloring problem: assign the smallest number of colors to the vertices of a graph so that no two adjacent vertices are the same color.

Combinatorial Problems:

The traveling salesman problem and the graph coloring problem are examples of combinatorial problems. These problems ask us to find a combinatorial object, such as a permutation, a combination, or a subset, that satisfies certain constraints and has some desired property.

Geometric Problems:

Geometric algorithms deal with geometric objects such as points, lines, and polygons. The ancient Greeks invented procedures for constructing simple geometric shapes such as triangles and circles.


Numerical Problems:

Numerical problems, another large area of applications, are problems that involve mathematical objects of a continuous nature: solving equations and systems of equations, computing definite integrals, evaluating functions, and so on.

FUNDAMENTALS OF THE ANALYSIS OF ALGORITHMS EFFICIENCY

Analysis of algorithms means investigating an algorithm's efficiency with respect to two resources: running time and memory space.

Time efficiency: how fast an algorithm runs.
Space efficiency: how much memory an algorithm requires.

Analysis Framework

Measuring an input's size
Measuring running time
Orders of growth (of the algorithm's efficiency function)
Worst-case, best-case, and average-case efficiency

Measuring an Input’s Size

Most programs run longer on larger inputs; for example, it takes longer to sort larger arrays or to multiply larger matrices. Efficiency is therefore defined as a function of input size. What counts as the input size depends on the problem.

Example 1: what is the input size of the problem of sorting n numbers?
Example 2: what is the input size of adding two n-by-n matrices?

Units for Measuring Running Time

Should we measure running time in standard units of time, such as seconds or minutes? No: that depends on the speed of a particular computer. Should we count the number of times each of an algorithm's operations is executed? That approach is difficult and unnecessary. Instead, we count the number of times the algorithm's basic operation is executed.

Basic operation: the most important operation of the algorithm, the operation contributing the most to the total running time. It is usually the most time-consuming operation in the algorithm's innermost loop.


Input Size and Basic Operation Examples

Problem                                  Input size measure              Basic operation
Search for a key in a list of n items    Number of items in the list, n  Key comparison
Add two n-by-n matrices                  Dimensions of the matrices, n   Addition
Polynomial evaluation                    Order of the polynomial         Multiplication

Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.

Here is an important application. Let cop be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ cop C(n)

Order of growth:

Why do we care about the order of growth of an algorithm's efficiency function, i.e., of the total number of basic operations? For example, Euclid's algorithm for the GCD is much more efficient than the other two algorithms, and its advantage grows with the size of the numbers:

GCD(60, 24): Euclid's algorithm? Consecutive integer checking?
GCD(31415, 14142): Euclid's algorithm? Consecutive integer checking?

We care about how fast the efficiency function grows as n gets larger.

Worst-Case, Best-Case, and Average-Case Efficiency

Algorithm efficiency depends on the input size n, but for some algorithms it depends not only on the input size but also on the specifics of a particular input.

Consider an example: Sequential Search

Problem: Given a list of n elements and a search key K, find an element equal to K, if any.


Algorithm: scan the list and compare its successive elements with K until either a matching element is found (successful search) or the list is exhausted (unsuccessful search).

Sequential Search Algorithm

ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: The index of the first element of A that matches K, or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n    //A[i] = K
    return i
else
    return -1

Worst-Case Efficiency

The worst-case efficiency of an algorithm is its efficiency (the number of times the basic operation will be executed) for the worst-case input of size n, i.e., the input for which the algorithm runs the longest among all possible inputs of size n. We analyze the algorithm to see what kind of inputs yield the largest value of the basic operation count C(n) among all possible inputs of size n and compute this worst-case value Cworst(n).
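A Python rendering of sequential search, instrumented to return the comparison count as well (an illustrative sketch, not part of the text):

```python
def sequential_search(a, key):
    """Return (index of first match or -1, number of key comparisons made)."""
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1        # the basic operation: compare A[i] with K
        if item == key:
            return i, comparisons
    return -1, comparisons
```

A worst-case input (key absent, or only in the last position) makes all n comparisons; a best-case input (key in the first position) makes just one.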

Best-Case Efficiency

The best-case efficiency of an algorithm is its efficiency (the number of times the basic operation will be executed) for the best-case input of size n, i.e., the input for which the algorithm runs the fastest among all possible inputs of size n. We then ascertain the value of C(n) on these inputs. For example, for sequential search, the best-case inputs are lists of size n whose first element equals the search key, so that Cbest(n) = 1.

Average-Case Efficiency

The average-case efficiency of an algorithm is its efficiency (the number of times the basic operation will be executed) for a typical or random input of size n; it is NOT the average of the worst and best cases. How do we find the average-case efficiency? Consider sequential search. The standard assumptions are:

a) the probability of a successful search is equal to p (0 ≤ p ≤ 1);
b) the probability of the first match occurring in the ith position of the list is the same for every i.

Under these assumptions we can find the average number of key comparisons Cavg(n) as follows. The probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in that case is i. In the case of an unsuccessful search, the number of comparisons is n, and the probability of such a search is (1 − p).

Cavg(n) = [1 · p/n + 2 · p/n + … + i · p/n + … + n · p/n] + n · (1 − p)
        = p/n [1 + 2 + … + i + … + n] + n(1 − p)
        = p/n · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p)
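The derivation can be checked numerically. The sketch below (function names ours) computes the expectation directly from the two cases and compares it with the closed form p(n + 1)/2 + n(1 − p):

```python
def cavg_direct(n, p):
    """Expected comparisons, summed case by case: the first match occurs in
    position i with probability p/n and costs i comparisons (i = 1..n);
    an unsuccessful search has probability 1 - p and costs n comparisons."""
    return sum(i * (p / n) for i in range(1, n + 1)) + n * (1 - p)

def cavg_closed(n, p):
    """The closed form derived above: p(n + 1)/2 + n(1 - p)."""
    return p * (n + 1) / 2 + n * (1 - p)
```

For p = 1 (the search always succeeds) this gives (n + 1)/2, about half the list; for p = 0 it gives n.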

Summary of the Analysis Framework

Both time and space efficiencies are measured as functions of input size.

Time efficiency is measured by counting the number of basic operations executed in the algorithm. The space efficiency is measured by the number of extra memory units consumed.

The framework's primary interest lies in the order of growth of the algorithm's running time (or space) as its input size goes to infinity.

The efficiencies of some algorithms may differ significantly for inputs of the same size. For these algorithms, we need to distinguish between the worst-case, best-case and average case efficiencies.

Asymptotic Notations

Three notations are used to compare the orders of growth of an algorithm's basic operation count:

O (big oh)
Ω (big omega)
Θ (big theta)

O(g(n)): the class of functions f(n) that grow no faster than g(n).
Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n).
Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n).

where
t(n) is the algorithm's running time,
C(n) is the basic operation count, and
g(n) is a simple function to compare the count with.

O-notation

Formal definition


A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≤ c g(n) for all n ≥ n0

Example: 100n + 5 ∈ O(n²).

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n²

Thus we can take the constants c = 101 and n0 = 5. Alternatively,

100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n ≤ 105n²

which completes the proof with c = 105 and n0 = 1.

Ω-notation

Formal definition: a function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≥ c g(n) for all n ≥ n0


Example of a formal proof that n³ ∈ Ω(n²):

n³ ≥ n² for all n ≥ 0, i.e., we can select c = 1 and n0 = 0.

Θ-notation

Formal definition: a function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that

c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0

Example: let us prove that (1/2)n(n − 1) ∈ Θ(n²).

First we prove the right inequality (the upper bound):

(1/2)n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0.

Second, we prove the left inequality (the lower bound):

(1/2)n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n · (1/2)n (for all n ≥ 2) = (1/4)n².

Hence we can select c2 = 1/4, c1 = 1/2, and n0 = 2.
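The chosen constants can be sanity-checked numerically (an illustrative sketch, not a substitute for the proof above):

```python
def t(n):
    """The function being classified: t(n) = n(n - 1)/2."""
    return n * (n - 1) / 2

# Verify c2 * g(n) <= t(n) <= c1 * g(n) with c2 = 1/4, c1 = 1/2, n0 = 2
c1, c2, n0 = 0.5, 0.25, 2
assert all(c2 * n * n <= t(n) <= c1 * n * n for n in range(n0, 1000))
```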

MATHEMATICAL ANALYSIS OF NONRECURSIVE ALGORITHMS

This section shows how to analyze the time efficiency of nonrecursive algorithms.

Example 1. Consider the problem of finding the value of the largest element in a list of n numbers. Assume the list is implemented as an array. The following is a pseudocode of a standard algorithm for solving the problem.


Algorithm MaxElement(A[0..n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n-1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval

The input size here is n, the number of elements in the array. The operations executed most often are inside the for loop. There are two operations in the loop's body:

the comparison A[i] > maxval, and
the assignment maxval ← A[i].

Which of these two operations should we consider the basic operation? The comparison, because it is executed on each repetition of the loop while the assignment is not.

Let C(n) denote the number of times this comparison is executed. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n − 1. Therefore we get the following sum for C(n):

C(n) = Σ (i = 1 to n − 1) 1 = n − 1 ∈ Θ(n).
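MaxElement in Python, instrumented to count executions of the basic operation (illustrative; the counter confirms C(n) = n − 1):

```python
def max_element(a):
    """Return (largest element of a, number of key comparisons made)."""
    comparisons = 0
    maxval = a[0]
    for i in range(1, len(a)):
        comparisons += 1          # the basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons
```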

The general plan to follow in analyzing nonrecursive algorithms is:
1. Decide on a parameter indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional properties, the worst-case, average-case, and best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.

Example 2. The following algorithm finds the number of binary digits in the binary representation of a positive decimal integer.

ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count


Here the most frequently executed operation is not inside the while loop but rather the comparison n > 1 that determines whether the loop's body will be executed. Since the number of times the comparison is executed is larger than the number of repetitions of the loop's body by exactly 1, the choice between the two is not important.

A more significant feature of this example is that the loop's variable takes on only a few values between its lower and upper limits; therefore we have to use an alternative way of computing the number of times the loop is executed. Since the value of n is about halved on each repetition of the loop, the answer is about log2 n: the loop's body is executed ⌊log2 n⌋ times, and the algorithm returns ⌊log2 n⌋ + 1, the number of binary digits of n.
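A direct Python rendering of ALGORITHM Binary (illustrative). For every positive n it returns ⌊log2 n⌋ + 1, which can be cross-checked against Python's built-in int.bit_length:

```python
def binary_digit_count(n):
    """Number of binary digits of a positive integer n, as in ALGORITHM Binary."""
    count = 1
    while n > 1:
        count += 1
        n //= 2       # n <- floor(n/2)
    return count
```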

MATHEMATICAL ANALYSIS OF RECURSIVE ALGORITHMS

Recursive evaluation of n!
Recursive solution to the Towers of Hanoi puzzle
Recursive solution to the number-of-binary-digits problem

Example: recursive evaluation of n!

Recursive definition:

Algorithm F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0
    return 1    //base case
else
    return F(n − 1) * n    //general case

This gives two recurrences. One for the factorial value F(n):

F(n) = F(n − 1) · n for every n > 0, F(0) = 1

and one for the number of multiplications M(n) needed to compute n!:

M(n) = M(n − 1) + 1 for every n > 0, M(0) = 0

which solves to M(n) = n, so M(n) ∈ Θ(n).
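A Python sketch (function name ours) that computes n! recursively while also returning M(n), the number of multiplications performed, confirming M(n) = n:

```python
def factorial(n):
    """Return (n!, number of multiplications M(n)) computed recursively."""
    if n == 0:
        return 1, 0           # base case: F(0) = 1, M(0) = 0
    f, m = factorial(n - 1)
    return f * n, m + 1       # F(n) = F(n-1) * n; one extra multiplication
```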

Steps in Mathematical Analysis of Recursive Algorithms


1. Decide on a parameter n indicating the input size.
2. Identify the algorithm's basic operation.
3. Determine the worst, average, and best cases for inputs of size n.
4. Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation will be executed for an input of size n (alternatively, count recursive calls).
5. Solve the recurrence, or at least estimate the order of growth of its solution.

Example : The Towers of Hanoi Puzzle

In this puzzle we have n disks of different sizes and three pegs. Initially all the disks are on the first peg in order of size, the largest on the bottom and the smallest on top. The goal is to move all the disks to the third peg, using the second one as an auxiliary if necessary. We can move only one disk at a time, and it is forbidden to place a larger disk on top of a smaller one.

To move n > 1 disks from peg 1 to peg 3 (with peg 2 as auxiliary), we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as auxiliary), then move the largest disk directly from peg 1 to peg 3, and finally move recursively n − 1 disks from peg 2 to peg 3 (with peg 1 as auxiliary). If n = 1, we simply move the single disk directly from the source peg to the destination peg.

The number of moves M(n) depends on n only, and we get the following recurrence equation for it:

M(n) = M(n − 1) + 1 + M(n − 1) for n > 1.

With the obvious initial condition M(1) = 1, we have the following recurrence relation for the number of moves M(n):

M(n) = 2M(n – 1) + 1 for every n > 1

M(1) = 1

We solve this recurrence by the method of backward substitutions:

M(n) = 2M(n − 1) + 1                                  [substitute M(n − 1) = 2M(n − 2) + 1]
     = 2[2M(n − 2) + 1] + 1 = 2²M(n − 2) + 2 + 1       [substitute M(n − 2) = 2M(n − 3) + 1]
     = 2²[2M(n − 3) + 1] + 2 + 1 = 2³M(n − 3) + 2² + 2 + 1

The next substitution gives


2⁴M(n − 4) + 2³ + 2² + 2 + 1

and after the ith substitution we get

M(n) = 2ⁱM(n − i) + 2ⁱ⁻¹ + 2ⁱ⁻² + … + 2 + 1 = 2ⁱM(n − i) + 2ⁱ − 1.

Since the initial condition is specified for n = 1, which is achieved for i = n − 1, we get the following formula for the solution to the recurrence:

M(n) = 2ⁿ⁻¹M(n − (n − 1)) + 2ⁿ⁻¹ − 1
     = 2ⁿ⁻¹M(1) + 2ⁿ⁻¹ − 1 = 2ⁿ⁻¹ + 2ⁿ⁻¹ − 1 = 2ⁿ − 1.
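The recursive solution can be sketched in Python (illustrative; the function returns the list of moves, whose length is M(n) = 2ⁿ − 1):

```python
def hanoi(n, source, target, auxiliary):
    """Return the sequence of moves (from_peg, to_peg) transferring n disks."""
    if n == 1:
        return [(source, target)]                     # one disk: move it directly
    return (hanoi(n - 1, source, auxiliary, target)   # n-1 disks to auxiliary peg
            + [(source, target)]                      # largest disk to target
            + hanoi(n - 1, auxiliary, target, source))  # n-1 disks onto it
```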

Divide and Conquer Method

Divide and Conquer is the best known general algorithm design technique.

Three Steps of The Divide and Conquer Approach

The most well-known algorithm design strategy:
1. Divide the problem into two or more smaller subproblems.
2. Conquer the subproblems by solving them recursively.
3. Combine the solutions to the subproblems into a solution for the original problem.

The divide-and-conquer technique is shown in the figure, which depicts the case of dividing a problem into two smaller subproblems.


An example: computing a0 + a1 + … + an−1.

ALGORITHM RecursiveSum(A[0..n-1])
//Input: An array A[0..n-1] of numbers
//Output: The sum of the array's elements
if n > 1
    return RecursiveSum(A[0..⌊n/2⌋−1]) + RecursiveSum(A[⌊n/2⌋..n−1])
else
    return A[0]

Divide and Conquer Examples

Sorting: mergesort and quicksort
Tree traversals
Binary search
Matrix multiplication: Strassen's algorithm

Merge sort is an example of the divide-and-conquer technique. It sorts a given array A[0..n-1] by dividing it into two halves, A[0..⌊n/2⌋−1] and A[⌊n/2⌋..n−1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.

The merging of two sorted arrays can be done as follows. Two pointers (array indices) are initialized to point to the first elements of the arrays being merged. The elements pointed to are compared, and the smaller of them is added to a new array being constructed; after that, the index of that smaller element is incremented to point to its immediate successor in the array it was copied from. This operation is repeated until one of the two given arrays is exhausted, and then the remaining elements of the other array are copied to the end of the new array.

[Figure: a typical divide-and-conquer case — a problem of size n is divided into two subproblems of size n/2; the solutions to the subproblems are combined into a solution to the original problem.]

ALGORITHM Mergesort(A[0..n-1])
//Sorts array A[0..n-1] by recursive mergesort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in nondecreasing order
if n > 1
    //divide
    copy A[0..⌊n/2⌋−1] to B[0..⌊n/2⌋−1]
    copy A[⌊n/2⌋..n−1] to C[0..⌈n/2⌉−1]
    //conquer
    Mergesort(B[0..⌊n/2⌋−1])
    Mergesort(C[0..⌈n/2⌉−1])
    //combine
    Merge(B, C, A)

ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p-1] and C[0..q-1], both sorted
//Output: Sorted array A[0..p+q-1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do    //while neither B nor C is exhausted
    if B[i] ≤ C[j]    //put the smaller element into A
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p    //list B is exhausted first
    copy C[j..q-1] to A[k..p+q-1]
else    //list C is exhausted first
    copy B[i..p-1] to A[k..p+q-1]
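The two routines above can be combined into a compact Python sketch (illustrative; it returns a new sorted list rather than sorting in place):

```python
def merge_sort(a):
    """Sort a list by recursive mergesort, as in the pseudocode above."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])       # sort left half
    c = merge_sort(a[mid:])       # sort right half
    # merge: repeatedly move the smaller front element into the result
    result, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    result.extend(b[i:])          # copy whichever half is not yet exhausted
    result.extend(c[j:])
    return result
```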

[Figure: example of a merge sort operation on a sample array.]

Efficiency of merge sort:

C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0,

where Cmerge(n) is the number of key comparisons performed during the merging stage. At each step exactly one comparison is made, after which the total number of elements in the two arrays still to be processed is reduced by one. In the worst case, neither of the two arrays becomes empty before the other contains just one element. Therefore, in the worst case, Cmerge(n) = n − 1, and

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0,

which solves to

Cworst(n) = n log2 n − n + 1.

The Divide, Conquer and Combine Steps in Quicksort

Divide: partition array A[l..r] into two subarrays A[l..s−1] and A[s+1..r] such that each element of the first array is ≤ A[s] and each element of the second array is ≥ A[s]. (Computing the index s is part of partitioning.)

Implication: A[s] will be in its final position in the sorted array.

Conquer: sort the two subarrays A[l..s−1] and A[s+1..r] by recursive calls to quicksort.

Combine: no work is needed, because A[s] is already in its correct place after the partition is done and the two subarrays have been sorted.

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r
//Output: The subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r])    //s is a split position
    Quicksort(A[l..s−1])
    Quicksort(A[s+1..r])

Quicksort

1. Select a pivot with respect to whose value we are going to divide the subarray; here we use the simple strategy of selecting the subarray's first element, p = A[l].

2. Rearrange the list so that it starts with the pivot followed by a ≤ sublist (a sublist whose elements are all smaller than or equal to the pivot) and a ≥ sublist (a sublist whose elements are all greater than or equal to the pivot). The rearranging is done by two scans, one left-to-right and one right-to-left.

3. Three situations may arise, depending on whether the scanning indices have crossed:


a. If the scanning indices i and j have not crossed (i < j), we simply exchange A[i] and A[j] and resume the scan by incrementing i and decrementing j, respectively:


b. If the scanning indices have crossed over (i > j), we have partitioned the array after exchanging the pivot with A[j]:

c. Finally, if the scanning indices stop while pointing to the same element (i = j), the value they are pointing to must be equal to p. Thus we have partitioned the array:

ALGORITHM Partition(A[l..r])
//Partitions a subarray by using its first element as a pivot
//Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//Output: A partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] >= p    // left-to-right scan
    repeat j ← j - 1 until A[j] <= p    // right-to-left scan
    if i < j                            // need to continue with the scan
        swap(A[i], A[j])
until i >= j                            // no need to scan further
swap(A[l], A[j])
return j
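The pseudocode above can be rendered in Python as follows. This is a sketch: the guard `i < r` replaces the textbook's implicit sentinel so the left-to-right scan cannot run off the array.

```python
def partition(a, l, r):
    """Hoare-style partition using a[l] as the pivot; returns the split index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:   # left-to-right scan
            i += 1
        j -= 1
        while a[j] > p:             # right-to-left scan (a[l] = p acts as sentinel)
            j -= 1
        if i >= j:                  # indices crossed: no need to scan further
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]         # put the pivot in its final position
    return j

def quicksort(a, l=0, r=None):
    """Sorts list a in place and returns it."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)      # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
    return a
```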

Efficiency of Quicksort

Based on whether the partitioning is balanced:

Best case: split in the middle — Θ(n log n); C(n) = 2C(n/2) + Θ(n) // 2 subproblems of size n/2 each

Worst case: already-sorted array — Θ(n²); C(n) = C(n-1) + (n+1) // subproblems of size 0 and n-1, respectively

Average case: random arrays — Θ(n log n)

Or

Efficiency of Quick sort:

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.


Cworst(n) = (n+1) + n + ... + 3 = (n+1)(n+2)/2 - 3 ∈ Θ(n²)

Cavg(n) = (1/n) Σ (s=0 to n-1) [(n+1) + Cavg(s) + Cavg(n-1-s)] for n > 1,
with Cavg(0) = 0, Cavg(1) = 0.

Cavg(n) ≈ 2n ln n ≈ 1.38n log2 n.

Binary search

The binary search algorithm can only be applied if the data are sorted. You can exploit the knowledge that they are sorted to speed up the search.

The idea is analogous to the way people look up an entry in a dictionary or telephone book. You don't start at page 1 and read every entry! Instead, you turn to a page somewhere about where you expect the item to be. If you are lucky you find the item straight away. If not, you know which part of the book will contain the item (if it is there), and repeat the process with just that part of the book.

If you always split the data in half and check the middle item, you halve the number of remaining items to check each time. This is much better than linear search, where each unsuccessful comparison eliminates just one item.

It works by comparing a search key K with the array’s middle element A[m]. If they match, the algorithm stops, otherwise the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m]:

A[0] . . . A[m-1]    A[m]    A[m+1] . . . A[n-1]
(search here if K < A[m])    (search here if K > A[m])

ALGORITHM BinarySearch(A[0..n-1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n-1] sorted in ascending order and a search key K
//Output: An index of the array's element that is equal to K, or -1 if there is no such element
l ← 0; r ← n - 1
while l ≤ r do                    // if l and r cross over, K cannot be found
    m ← ⌊(l + r) / 2⌋
    if K = A[m] return m          // the key is found
    else if K < A[m] r ← m - 1    // the key is in the left half of the array


    else l ← m + 1                // the key is in the right half of the array
return -1
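The same algorithm in runnable form (a minimal sketch):

```python
def binary_search(a, key):
    """Nonrecursive binary search in a sorted list a.
    Returns an index of key, or -1 if key is absent."""
    l, r = 0, len(a) - 1
    while l <= r:                # when l and r cross over, key cannot be found
        m = (l + r) // 2
        if key == a[m]:
            return m             # the key is found
        elif key < a[m]:
            r = m - 1            # continue in the left half
        else:
            l = m + 1            # continue in the right half
    return -1
```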

Binary search - Example: seeking the value 123 in a sorted array (trace figure omitted).

Binary Tree Traversals

Definitions

A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root.

The height of a tree is defined as the length of the longest path from the root to a leaf.

Problem: find the height of a binary tree.


Find the Height of a Binary Tree

ALGORITHM Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅
    return -1
else
    return max{Height(TL), Height(TR)} + 1

Efficiency

C(n) = n + x = 2n + 1, where n is the number of internal nodes and x = n + 1 is the number of external (empty) subtrees checked.

In the preorder traversal, the root is visited before the left and right subtrees are visited

In the inorder traversal, the root is visited after visiting its left subtree but before visiting the right subtree

In the postorder traversal the root is visited after visiting the left and right subtrees.
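The height computation and the traversals can be sketched over a simple tuple encoding of trees (our own encoding, not from the text): a node is `(value, left, right)` and `None` is the empty tree.

```python
def height(t):
    """Height of a binary tree; the empty tree has height -1."""
    if t is None:
        return -1
    _, left, right = t
    return max(height(left), height(right)) + 1

def preorder(t):
    """Preorder traversal: root, then left subtree, then right subtree."""
    if t is None:
        return []
    value, left, right = t
    return [value] + preorder(left) + preorder(right)

def inorder(t):
    """Inorder traversal: left subtree, then root, then right subtree."""
    if t is None:
        return []
    value, left, right = t
    return inorder(left) + [value] + inorder(right)
```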

Strassen’s Matrix Multiplication

Brute-force algorithm:

[c00 c01]   [a00 a01]   [b00 b01]
[c10 c11] = [a10 a11] * [b10 b11]

c00 = a00*b00 + a01*b10    c01 = a00*b01 + a01*b11
c10 = a10*b00 + a11*b10    c11 = a10*b01 + a11*b11

8 multiplications, 4 additions. Efficiency class: Θ(n³)

Strassen’s Algorithm (two 2x2 matrices)

[c00 c01]   [a00 a01]   [b00 b01]
[c10 c11] = [a10 a11] * [b10 b11]

[c00 c01]   [m1 + m4 - m5 + m7    m3 + m5          ]
[c10 c11] = [m2 + m4              m1 + m3 - m2 + m6]

where

m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 - b11)
m4 = a11 * (b10 - b00)


m5 = (a00 + a01) * b11
m6 = (a10 - a00) * (b00 + b01)
m7 = (a01 - a11) * (b10 + b11)

7 multiplications, 18 additions/subtractions.

Let A and B be two n×n matrices, where n is a power of two. (What about situations where n is not a power of two? Matrices can be padded with rows and columns of zeros.) We can divide A, B and their product C into four n/2-by-n/2 submatrices each as follows:

[C00 C01]   [A00 A01]   [B00 B01]
[C10 C11] = [A10 A11] * [B10 B11]

C00 can be computed either as A00 * B00 + A01 * B10 or M1 + M4 - M5 + M7

[C00 C01]   [M1 + M4 - M5 + M7    M3 + M5          ]
[C10 C11] = [M2 + M4              M1 + M3 - M2 + M6]

Submatrices:

M1 = (A00 + A11) * (B00 + B11)

M2 = (A10 + A11) * B00

M3 = A00 * (B01 - B11)

M4 = A11 * (B10 - B00)

M5 = (A00 + A01) * B11

M6 = (A10 - A00) * (B00 + B01)

M7 = (A01 - A11) * (B10 + B11)
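The seven products and the four quadrant formulas can be checked directly on 2×2 numeric matrices (a sanity-check sketch; the function name is ours):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 multiplications."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # Assemble the four quadrants of the product.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]
```

In the full algorithm the scalar multiplications above become recursive multiplications of n/2-by-n/2 submatrices.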

The seven products of n/2 by n/2 matrices are computed. Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the number of multiplications made by Strassen’s algorithm in multiplying two nxn matrices where n is a power of 2, we get the following recurrence relation for it:

M(n) = 7M(n/2) for n > 1, M(1) = 1.

Solving this recurrence by backward substitutions for n = 2^k yields M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807, which is better than the n³ multiplications required by the brute-force algorithm.



Greedy Algorithms

Change Making Problem

How to make 48 cents of change using coins of denominations 25, 10, 5, and 1 so that the total number of coins is the smallest?

The idea: make the locally best choice at each step. Is the solution optimal? A greedy algorithm makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. The choice made at each step must be:

Feasible: it satisfies the problem's constraints.

Locally optimal: it is the best local choice among all feasible choices.

Irrevocable: once made, the choice cannot be changed on subsequent steps.
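For the 48-cents instance, the greedy choices can be sketched as follows. Note (our caveat, beyond the text): greedy change making is optimal for this particular coin system, but not for every system of denominations.

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy change making: repeatedly take the largest coin that fits."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:          # locally best (largest) feasible coin
            coins.append(d)
            amount -= d
    return coins
```

For example, with denominations (25, 10, 1) and amount 30, greedy yields six coins (25 + five 1s) even though three 10s would do, illustrating why the optimality question must be asked for each problem.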

Applications of the Greedy Strategy

Optimal solutions:
  change making
  Minimum Spanning Tree (MST)
  Single-source shortest paths
  Huffman codes

Approximations:
  Traveling Salesman Problem (TSP)
  Knapsack problem
  other optimization problems

Minimum Spanning Tree (MST)


Spanning tree of a connected graph G: a connected acyclic subgraph (tree) of G that includes all of G’s vertices.

Minimum Spanning Tree of a weighted, connected graph G: is a spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges.

Graph and its spanning trees; T1 is the minimum spanning tree

(figure omitted: a four-vertex graph on a, b, c, d with edge weights 1, 2, 3, 5, and three of its spanning trees with w(T1) = 6, w(T2) = 9, w(T3) = 8)

Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices. On each iteration, we expand the current tree in the greedy manner by simply attaching to it the nearest vertex not in that tree. The algorithm stops after all the graph's vertices have been included in the tree being constructed.

ALGORITHM Prim(G)

//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0}
ET ← ∅
for i ← 1 to |V| - 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
        such that v is in VT and u is in V - VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
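A runnable sketch of Prim's algorithm (graph encoded as an adjacency dict `{vertex: {neighbor: weight}}`; a plain minimum scan stands in for the priority queue, in the spirit of the Θ(n²) unordered-array implementation):

```python
def prim(graph, start):
    """Prim's MST on an undirected weighted graph.
    Returns the list of tree edges as (v, u, weight) tuples."""
    in_tree = {start}
    tree_edges = []
    while len(in_tree) < len(graph):
        # Greedy step: cheapest edge from a tree vertex to a fringe vertex.
        v, u, w = min(
            ((v, u, w) for v in in_tree
                       for u, w in graph[v].items()
                       if u not in in_tree),
            key=lambda e: e[2])
        in_tree.add(u)
        tree_edges.append((v, u, w))
    return tree_edges
```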


Each vertex carries information about the shortest edge connecting it to a tree vertex. Such information can be provided by attaching two labels to a vertex: the name of the nearest tree vertex and the length (weight) of the corresponding edge. Vertices not adjacent to any of the tree vertices can be given the ∞ label, indicating their "infinite" distance to the tree vertices, and a null label for the name of the nearest tree vertex. Vertices can be split into two sets:

Fringe: contains only the vertices that are not in the tree but are adjacent to at least one tree vertex. These are the candidates from which the next tree vertex is selected.

Unseen: all the other vertices of the graph, which are yet to be affected by the algorithm.

With such labels, finding the next vertex to be added to the current tree T=( VT, E T) becomes a simple task of finding a vertex with the smallest distance label in the set V- VT

After we have identified a vertex u* to be added to the tree, we need to perform two operations:

Move u* from the set V – VT to the set of tree vertices VT

For each remaining vertex u in V - VT that is connected to u* by a shorter edge than u's current distance label, update its labels with u* and the weight of the edge between u* and u, respectively.

Start with a tree T0 consisting of one vertex.

"Grow" the tree one vertex/edge at a time, constructing a series of expanding subtrees T1, T2, ..., Tn-1. At each stage, construct Ti+1 from Ti by adding the minimum-weight edge connecting a vertex in the tree (Ti) to one not yet in the tree, choosing from the "fringe" edges (this is the "greedy" step!).

Or (another way to understand it): expand each tree Ti in a greedy manner by attaching to it the nearest vertex not in that tree (a vertex not in the tree connected to a vertex in the tree by an edge of the smallest weight).

The algorithm stops when all vertices are included.

A priority queue is needed for locating the nearest vertex:

Using an unordered array to store the priority queue: efficiency Θ(n²).

Using a min-heap to store the priority queue: for a graph with n vertices and m edges, efficiency O((n + m) log n).


Key decreases and deletions in the min-heap: there are n - 1 stages (min-heap deletions) and at most m edge considerations (min-heap key decreases); each heap operation takes O(log n), giving O(m log n) overall.

Kruskal’s Algorithm

There is another greedy algorithm for the minimum spanning tree problem that also yields an optimal solution. It is called Kruskal's algorithm, after Joseph Kruskal, who discovered it when he was a second-year graduate student. Kruskal's algorithm looks at a minimum spanning tree of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| - 1 edges for which the sum of the edge weights is the smallest.

Edges are initially sorted in nondecreasing order of their weights.

Start with an empty forest/subgraph and "grow" the MST one edge at a time; intermediate stages usually contain a forest of trees (not connected).

At each stage, add the minimum-weight edge among those not yet used that does not create a cycle. At each stage the added edge may:

  expand an existing tree
  combine two existing trees into a single tree
  create a new tree

We need an efficient way of detecting/avoiding cycles. The algorithm stops when all vertices are included.

ALGORITHM Kruskal(G)
//Input: A weighted connected graph G = <V, E>
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights: w(e_i1) ≤ ... ≤ w(e_i|E|)
ET ← ∅; ecounter ← 0      // initialize the set of tree edges and its size
k ← 0
while ecounter < |V| - 1 do
    k ← k + 1
    if ET ∪ {e_ik} is acyclic
        ET ← ET ∪ {e_ik}; ecounter ← ecounter + 1


return ET
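A runnable sketch of Kruskal's algorithm with a simple union-find structure for cycle detection (in this sketch, vertices are the integers 0..n-1 and edges are (weight, u, v) tuples):

```python
def kruskal(n, edges):
    """Kruskal's MST. Returns the list of tree edges as (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        """Root of x's component, with path compression."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # nondecreasing order of weights
        ru, rv = find(u), find(v)
        if ru != rv:                  # the edge does not create a cycle
            parent[ru] = rv           # union the two components
            tree.append((w, u, v))
            if len(tree) == n - 1:    # |V| - 1 edges: spanning tree complete
                break
    return tree
```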

Application of Kruskal's algorithm

(figure omitted: a six-vertex graph on a, b, c, d, e, f whose edges and weights are listed below)

Sorted list of edges, in nondecreasing order of weight:

bc:1  ef:2  ab:3  bf:4  cf:4  af:5  df:5  ae:6  cd:6  de:8

Tree edges selected, in order: bc (1), ef (2), ab (3), bf (4), df (5). The edges cf and af are skipped because each would create a cycle with the edges already selected; after df is added the tree has |V| - 1 = 5 edges and the algorithm stops.

Dijkstra's Algorithm

For a given vertex called the source in a weighted connected graph, find shortest paths to all its other vertices. This single-source shortest-paths problem asks for a family of paths, each leading from the source to a different vertex in the graph, though some paths may have edges in common. The best-known algorithm for the single-source shortest-paths problem is Dijkstra's algorithm.

Shortest Path Problems: All pair shortest paths (Floyd’s algorithm)


Single Source Shortest Paths Problem (Dijkstra’s algorithm): Given a weighted graph G, find the shortest paths from a source vertex s to each of the other vertices.

Dijkstra's algorithm finds shortest paths to a graph's vertices in order of their distance from a given source. First, it finds the shortest path from the source to a vertex nearest to it, then to a second-nearest, and so on. Before its ith iteration commences, the algorithm has already identified the shortest paths to i-1 other vertices nearest to the source. These vertices, the source, and the edges of the shortest paths leading to them from the source form a subtree Ti of the given graph. Since all the edge weights are nonnegative, the next vertex nearest to the source can be found among the vertices adjacent to the vertices of Ti. The set of vertices adjacent to the vertices in Ti can be referred to as "fringe vertices"; they are the candidates from which Dijkstra's algorithm selects the next vertex nearest to the source. To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u, the sum of the distance to the nearest tree vertex v (given by the weight of the edge (v, u)) and the length dv of the shortest path from the source to v, and then selects the vertex with the smallest such sum.

To facilitate the algorithm's operation, we label each vertex with two labels. The numeric label d indicates the length of the shortest path from the source to this vertex found by the algorithm so far. The other label indicates the name of the next-to-last vertex on such a path, i.e., the parent of the vertex in the tree being constructed. With such labeling, finding the next nearest vertex u* becomes a simple task of finding a fringe vertex with the smallest d value.

After identifying the vertex u* to be added to the tree, we need to perform two operations:

Move u* from the fringe to the set of tree vertices.

For each remaining fringe vertex u that is connected to u* by an edge of weight w(u*, u) such that du* + w(u*, u) < du, update the labels of u by u* and du* + w(u*, u), respectively.

ALGORITHM Dijkstra(G, s)
//Input: A weighted connected graph G = <V, E> and a source vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv for every vertex v in V
Initialize(Q)                      // initialize the vertex priority queue
for every vertex v in V do
    dv ← ∞; pv ← null              // pv is the parent of v
    Insert(Q, v, dv)               // initialize vertex priority in the priority queue


ds ← 0; Decrease(Q, s, ds)         // update priority of s with ds, making ds the minimum
VT ← ∅
for i ← 0 to |V| - 1 do            // produce |V| - 1 edges for the tree
    u* ← DeleteMin(Q)              // delete the minimum-priority element
    VT ← VT ∪ {u*}                 // expand the tree with the locally best vertex
    for every vertex u in V - VT that is adjacent to u* do
        if du* + w(u*, u) < du
            du ← du* + w(u*, u); pu ← u*
            Decrease(Q, u, du)

The time efficiency of Dijkstra’s algorithm depends on the data structure used for implementing the priority queue and for representing an input graph itself.
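A runnable sketch using Python's `heapq` module as the priority queue. The example graph in the test is our reconstruction of the five-vertex instance traced in the example; treat its edge weights as assumptions.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph given as
    {vertex: {neighbor: weight}} with nonnegative weights.
    Returns {vertex: shortest distance from source}."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)          # nearest unprocessed vertex
        if u in done:                       # skip stale heap entries
            continue
        done.add(u)
        for v, w in graph[u].items():
            if d + w < dist[v]:             # found a shorter path through u
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```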

Example (figure omitted: a five-vertex weighted graph on a, b, c, d, e):

Tree vertices    Remaining vertices
a(-, 0)          b(a, 3)    c(-, ∞)    d(a, 7)    e(-, ∞)
b(a, 3)          c(b, 3+4)  d(b, 3+2)  e(-, ∞)
d(b, 5)          c(b, 7)    e(d, 5+4)
c(b, 7)          e(d, 9)
e(d, 9)

Dynamic Programming

Dynamic Programming is a general algorithm design technique. "Programming" here means "planning". It was invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.

Main idea: solve several smaller (overlapping) subproblems


record solutions in a table so that each subproblem is only solved once

the final state of the table will be (or contain) the solution

Dynamic programming vs. divide-and-conquer:

dynamic programming partitions a problem into overlapping subproblems, whereas divide-and-conquer partitions it into independent ones

dynamic programming stores solutions to subproblems, whereas divide-and-conquer does not

This technique is illustrated by the Fibonacci numbers. The Fibonacci numbers are the elements of the sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34….

f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n > 1

• Computing the nth Fibonacci number recursively (top-down):

f(n)
f(n-1)           +           f(n-2)
f(n-2) + f(n-3)       f(n-3) + f(n-4)
...

Computing the nth Fibonacci number using bottom-up iteration:

f(0) = 0
f(1) = 1
f(2) = 0 + 1 = 1
f(3) = 1 + 1 = 2
f(4) = 1 + 2 = 3
f(5) = 2 + 3 = 5
...
f(n) = f(n-1) + f(n-2)

ALGORITHM Fib(n)
F[0] ← 0; F[1] ← 1
for i ← 2 to n do
    F[i] ← F[i-1] + F[i-2]
return F[n]
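The bottom-up algorithm in Python; each subproblem is computed exactly once and recorded in the table:

```python
def fib(n):
    """Bottom-up Fibonacci using a table of previously solved subproblems."""
    if n == 0:
        return 0
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]   # each entry reuses two stored solutions
    return f[n]
```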

Examples of Dynamic Programming Algorithms

Computing binomial coefficients

Warshall’s algorithm for transitive closure


Floyd’s algorithms for all-pairs shortest paths

Constructing an optimal binary search tree

Some instances of difficult discrete optimization problems: Knapsack

Computing Binomial Coefficients

A binomial coefficient, denoted C(n, k), is the number of combinations of k elements from an n-element set (0 ≤ k ≤ n). Binomial coefficients satisfy the recurrence relation

C(n, k) = C(n-1, k-1) + C(n-1, k), for n > k > 0, and C(n, 0) = C(n, n) = 1

Dynamic programming solution: Record the values of the binomial coefficients in a table of

n+1 rows and k+1 columns, numbered from 0 to n and 0 to k respectively.

1
1  1
1  2  1
1  3  3  1
1  4  6  4  1
1  5 10 10  5  1

ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else
            C[i, j] ← C[i-1, j-1] + C[i-1, j]
return C[n, k]
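The table-filling algorithm in Python (a direct transcription of the pseudocode above):

```python
def binomial(n, k):
    """C(n, k) via the dynamic programming table of Pascal's triangle."""
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:             # edges of Pascal's triangle
                c[i][j] = 1
            else:                            # C(i, j) = C(i-1, j-1) + C(i-1, j)
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
    return c[n][k]
```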

Efficiency of this algorithm:

The algorithm's basic operation is addition. Let A(n, k) be the total number of additions made by this algorithm in computing C(n, k). Note that because the first k+1 rows of the table form a triangle while the remaining n-k rows form a rectangle, we have to split the sum expressing A(n, k) into two parts:

A(n, k) = Σ (i=1 to k) Σ (j=1 to i-1) 1 + Σ (i=k+1 to n) Σ (j=1 to k) 1
        = Σ (i=1 to k) (i - 1) + Σ (i=k+1 to n) k
        = (k - 1)k/2 + k(n - k) ∈ Θ(nk)



Definition: The transitive closure of a directed graph with n vertices can be defined as the n×n matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.

Graph traversal-based algorithm and Warshall’s algorithm
