True
False
False [Remember, little-o means "less than" not "less than or equal."]
True
False [But if it were big-Omega, it would be true!]
False [Because exponentials differ by more than just a constant factor]
True
True
True
False [see #6]
False [remember the decision tree proof?]
False [it's just constant time]
False [bubble sort is linear in the best case!]
True
2^h nodes [This is the case where there is one lone leaf hanging off the left side of the tree. This is just one more than the 2^h - 1 nodes in a complete binary tree of height h-1.]
The "pivot" is the first element in the subarray. The purpose of Partition is to divide the subarray into two parts: the left hand side, where everything is less than or equal to the pivot, and the right hand side, where everything is greater than or equal to the pivot.
A binary tree has the heap property if and only if, for every node in the tree, the value of the node is greater than or equal to the values of its children (this holds trivially at the leaves).
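A small Python checker makes the definition concrete (a sketch assuming the array representation of a heap, where the children of node i live at indices 2i and 2i+1; the function name is hypothetical):

    def has_heap_property(A, n):
        """True iff A[1..n] satisfies the (max-)heap property:
        each node's value is >= the values of its children."""
        for i in range(1, n + 1):
            for child in (2 * i, 2 * i + 1):
                if child <= n and A[child] > A[i]:
                    return False
        return True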
The array can be permuted randomly (in linear time) before Quicksort is started. That way, it is highly improbable that the array will end up in one of the few degenerate cases (such as already-sorted input) that come up frequently in practice.
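The linear-time random permutation the answer alludes to is commonly done with a Fisher-Yates shuffle; a minimal Python sketch:

    import random

    def shuffled(A):
        """Return a uniform random permutation of A in O(n) time
        (Fisher-Yates)."""
        A = list(A)
        for i in range(len(A) - 1, 0, -1):
            j = random.randint(0, i)   # uniform choice from A[0..i]
            A[i], A[j] = A[j], A[i]
        return A

Running Quicksort on shuffled(A) makes every input permutation equally likely, so no particular input (e.g., sorted data) can reliably trigger the worst case.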
Memoizing is a method of instrumenting a recursive algorithm with extra data structures that can hold already computed information. This way, the algorithm can use the computed information rather than recompute it.
No. The original algorithm, which simply does the recursion without the array, will reach O(n) levels of recursion before coming back up for additions. Each level requires an extra stack frame, so O(n) storage is still required.
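To see both effects at once, here is a hypothetical memoized Fibonacci in Python: the memo table cuts the time to O(n), but the very first call still descends O(n) stack frames before any addition is performed.

    def fib(n, memo=None):
        """Memoized Fibonacci: O(n) time, but still O(n) space, both for
        the memo table and for the recursion itself."""
        if memo is None:
            memo = {0: 0, 1: 1}
        if n not in memo:
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]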
Invoking the sort on an already-sorted array: bubble sort is O(n), while Quicksort is O(n²).
No. All triangulations have exactly n-2 triangles. [ But note that what we really care about with this problem is the number of chords; each triangle specifies one or two chords of the polygon, so redundant chords in the output are possible with triangulations. ]
        5
      /   \
     3     9
    / \   / \
   1   2 4   6
        9
      /   \
     3     6
    / \   / \
   1   2 4   5
index    _1___2___3___4___5___6___7_
value   |___|___|___|___|___|___|___|
index    _1___2___3___4___5___6___7_
value   |_9_|_3_|_6_|_1_|_2_|_4_|_5_|
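The correspondence between the tree and the array uses the standard 1-indexed mapping: the children of the node at index i are at indices 2i and 2i+1. A quick (hypothetical) check in Python:

    heap = [None, 9, 3, 6, 1, 2, 4, 5]   # index 0 unused, so indices match
    for i in range(1, 4):                # the internal nodes: indices 1..3
        print(heap[i], "->", heap[2 * i], heap[2 * i + 1])
    # prints: 9 -> 3 6, then 3 -> 1 2, then 6 -> 4 5, matching the tree above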
 1  Bubble-Sort (A, n)
 2      swaps_done = True
 3      while swaps_done do
 4          swaps_done = False
 5          for i in 2..n do
 6              if A[i] < A[i-1] then
 7                  exchange A[i] with A[i-1]
 8                  swaps_done = True
 9              end if
10          end for
11      end while

The running time of the algorithm can be characterized by the number of comparisons made, i.e., the number of times line 6 is executed.
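A direct Python transcription (hypothetical; the counter records how many times the line-6 comparison runs) can be used to confirm the answers below:

    def bubble_sort(A):
        """Bubble sort A in place; return the number of key comparisons
        (executions of line 6 in the pseudocode above)."""
        comparisons = 0
        swaps_done = True
        while swaps_done:
            swaps_done = False
            for i in range(1, len(A)):
                comparisons += 1
                if A[i] < A[i - 1]:
                    A[i], A[i - 1] = A[i - 1], A[i]
                    swaps_done = True
        return comparisons

On an already-sorted input of length n it returns n-1 (one pass); on reverse-sorted input it makes on the order of n² comparisons.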
Θ(n)
The data must already be sorted.
O(n²)
The data must be sorted in reverse order, requiring the maximum amount of time before 'swaps_done' can ever be False.
Professor Fubar claims that there is a worst-case lower bound of Ω(n ln n) on the number of array accesses needed to solve this problem.
We know that this is true for a comparison sort, but it's not clear whether this extends to every sorting algorithm.
Professor Emacs refutes the claim with this argument: Arrange the integers as a list of strings of digits, using leading zeros when needed, then use radix sort. We know radix sort makes Θ(cn) array accesses, where c is the number of digits per item. If c is a constant, then only Θ(n) array accesses are needed.
Is this a valid refutation of the claim? Explain your answer in two or three brief sentences. Remember, we are interested only in this refutation, so don't try to think of a different algorithm.
No. Since the integers are all unique, at least one of them must be at least n (if they were all less than n, there would have to be duplicates). So a lower bound on the value of c for radix sort is the number of digits in n, i.e., floor(1 + log₁₀ n). So radix sort would take Θ(n log₁₀ n) = Θ(n ln n) time, which is what Professor Fubar predicted.
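A quick numeric illustration of the bound on c (the values are hypothetical):

    import math

    n = 1_000_000
    # n distinct positive integers force at least one value >= n,
    # so radix sort needs at least c = floor(1 + log10(n)) digit passes.
    c = math.floor(1 + math.log10(n))
    print(c)        # 7: a million distinct keys need 7-digit numbers
    print(c * n)    # ~ n log10(n) array accesses, i.e., Theta(n ln n)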
 1  Sort (A, i, n)
 2      if i = n then
 3          for j in 2..n do
 4              if A[j-1] > A[j] then return False
 5          end for
 6          return True
 7      else
 8          for j in i..n do
 9              exchange A[i] with A[j]
10              if Sort (A, i+1, n) then return True
11              exchange A[i] with A[j]
12          end for
13          return False
14      end if

Note that the value of i recursively increases from 1 to n (see line 10). The recursion stops when i reaches n; at this point, the array is checked to see if it is in order. If it isn't, False is returned and the algorithm continues. If it is, True is returned and all the recursion "unwinds" and stops, leaving the array in sorted order. When i isn't equal to n, the algorithm tries exchanging every element in A[i..n] with A[i], seeing if the resulting array is (recursively) sorted (and if so, returning with True), then restoring the array to its original order to try again. This just generates every possible permutation of A[1..n], testing each one to see if it is sorted.
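The same brute-force strategy, written as a Python sketch with itertools instead of the in-place swapping (illustrative only):

    from itertools import permutations

    def permutation_sort(A):
        """Try every permutation of A until one is in order."""
        for p in permutations(A):
            if all(p[j - 1] <= p[j] for j in range(1, len(p))):
                return list(p)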
Give upper and lower bounds on the number of times line 4 is executed. Your bounds should be within a factor of n of each other. You may assume that, on average, half of the permutations must be checked before the sorted one is found.
Line 4 is part of the loop that checks whether the array is sorted. It is executed at least once every time the loop is reached. Since the algorithm simply goes through all possible permutations of the array, line 4 is executed Ω(n!) times (exactly (1/2) n! times if we find the sorted permutation halfway through the set of permutations).

Since line 4 is in a loop that might not end until n comparisons have been done, and the "for" loop is done (1/2) n! times, an upper bound is O(n · n!), or equivalently O(n² · (n-1)!). [Note that this is a mind-numbingly inefficient algorithm! Note also that finding the right answer quickly is just a matter of knowing what to ignore in the text of the problem. Lines 7 through 14 of the algorithm, an opaque jumble of recursive invocations, can be ignored, since the problem itself states that "this just generates every possible permutation" of the array, and we know that there are n! permutations of n things.]
(n C k) = [(n-1) C (k-1)] + [(n-1) C k],  if 0 < k < n
(n C k) = 1,                              otherwise

Write a memoized recursive function that computes the binomial coefficient. What is a tight bound on the running time of your algorithm? On the space required for your algorithm?
memo_bin[1..n][1..k] is initialized to all -1

Binomial-Coefficient (n, k)
    if k <= 0 or n <= k then
        return 1
    end if
    if memo_bin[n][k] == -1 then
        memo_bin[n][k] = Binomial-Coefficient (n-1, k-1) + Binomial-Coefficient (n-1, k)
    end if
    return memo_bin[n][k]

Each recursive call "paints" the entire array with values. The first one goes "down and to the left," starting a new column; the second one continues "painting" the current column. Since no value is computed more than once due to the memoization, the time is the same as the storage for the memo_bin array: Θ(nk). [Since k < n, an upper bound is O(n²).]
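An executable Python rendering of the same idea (a sketch; a dictionary stands in for the pre-sized memo_bin array):

    def binomial(n, k, memo=None):
        """Memoized binomial coefficient via
        C(n,k) = C(n-1,k-1) + C(n-1,k); Theta(nk) time and space."""
        if memo is None:
            memo = {}
        if k <= 0 or n <= k:
            return 1
        if (n, k) not in memo:
            memo[(n, k)] = binomial(n - 1, k - 1, memo) + binomial(n - 1, k, memo)
        return memo[(n, k)]

For example, binomial(5, 2) returns 10, and each (n, k) pair is computed at most once.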
Counting-Sort (A, n)
    c = array [1..1000] of integers, initialized to all zero
    for i in 1..n do
        c[A[i]]++
    end for
    j = 1
    for i in 1..1000 do
        while c[i] != 0 do
            A[j++] = i
            c[i]--
        end while
    end for

The first loop takes Θ(n). The second loop also takes Θ(n), since the sum of the elements in c[] is n; thus A[] is accessed n times. So the running time of the whole algorithm is Θ(n).
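The same algorithm as a runnable Python sketch (assuming, as the pseudocode does, that every key is an integer in 1..1000):

    def counting_sort(A):
        """Sort A in place in Theta(n) time, given keys in 1..1000."""
        c = [0] * 1001            # c[v] counts occurrences of value v
        for x in A:
            c[x] += 1
        j = 0
        for v in range(1, 1001):
            while c[v]:           # write out v as many times as it occurred
                A[j] = v
                j += 1
                c[v] -= 1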