Linear time sorting in DAA

Linear-Time Sorting
  • Counting Sort
  • Radix Sort
  • Bucket Sort

Linear-Time Sorting


There are sorting algorithms that run faster than O(n lg n) time, but they require special assumptions about the input sequence to be sorted. Examples of sorting algorithms that run in linear time are counting sort, radix sort and bucket sort. Counting sort and radix sort assume that the input consists of integers in a small range, whereas bucket sort assumes that the input is generated by a random process that distributes elements uniformly over an interval.
Since we already know that the best a comparison-based sort can do is Ω(n lg n), it is not difficult to figure out that linear-time sorting algorithms use operations other than comparisons to determine the sorted order.
Despite their linear running time, these algorithms are usually not very desirable from a practical point of view. Firstly, the efficiency of linear-time algorithms depends on the keys being distributed as assumed; if this condition is not satisfied, the result is degraded performance. Secondly, these algorithms require extra space proportional to the size of the array being sorted, so if we are dealing with a large file, the extra array becomes a real liability. Thirdly, the "inner loop" of these algorithms contains quite a few instructions, so even though they are linear, they may not be much faster than, say, quicksort.


Counting Sort


Counting sort assumes that each of the elements is an integer in the range 1 to k, for some integer k. When k = O(n), counting sort runs in O(n) time. The basic idea of counting sort is to determine, for each input element x, the number of elements less than x. This information can be used to place x directly into its correct position. For example, if there are 17 elements less than x, then x belongs in output position 18.
In the code for counting sort, we are given an array A[1 . . n] of length n. We require two more arrays: the array B[1 . . n] holds the sorted output, and the array c[1 . . k] provides temporary working storage.

COUNTING_SORT (A, B, k)
  1. for i ← 1 to k do
  2.     c[i] ← 0
  3. for j ← 1 to n do
  4.     c[A[j]] ← c[A[j]] + 1
  5. // c[i] now contains the number of elements equal to i
  6. for i ← 2 to k do
  7.     c[i] ← c[i] + c[i-1]
  8. // c[i] now contains the number of elements ≤ i
  9. for j ← n downto 1 do
  10.     B[c[A[j]]] ← A[j]
  11.     c[A[j]] ← c[A[j]] - 1

Each line below shows the step-by-step operation of counting sort on the input A = ⟨3, 6, 4, 1, 3, 4, 1, 4⟩ with k = 6.

After the counting loop (lines 3-4):   c = ⟨2, 0, 2, 3, 0, 1⟩
After the prefix sums (lines 6-7):     c = ⟨2, 2, 4, 7, 7, 8⟩

The loop of lines 9-11 then scans A from right to left, placing each element and decrementing its counter:

  j = 8: A[8] = 4 is placed in B[7], giving c = ⟨2, 2, 4, 6, 7, 8⟩
  j = 7: A[7] = 1 is placed in B[2], giving c = ⟨1, 2, 4, 6, 7, 8⟩
  j = 6: A[6] = 4 is placed in B[6], giving c = ⟨1, 2, 4, 5, 7, 8⟩
  . . .

After all eight placements, B = ⟨1, 1, 3, 3, 4, 4, 4, 6⟩.

Analysis

  1. The loop of lines 1-2   takes O(k) time
  2. The loop of lines 3-4   takes O(n) time
  3. The loop of lines 6-7   takes O(k) time
  4. The loop of lines 9-11 takes O(n) time
Therefore, the overall time of the counting sort is O(k) + O(n) + O(k) + O(n) = O(k + n)

In practice, we usually use the counting sort algorithm when we have k = O(n), in which case the running time is O(n).
Counting sort is a stable sort, i.e., multiple keys with the same value are placed in the sorted array in the same order that they appear in the input array.
Suppose that the for-loop in line 9 of the counting sort is rewritten as:

9. for j ← 1 to n do

Then stability no longer holds. Notice that the correctness argument in CLR does not depend on the order in which the array A[1 . . n] is processed; the algorithm is correct no matter what order is used. In particular, the modified algorithm still places the elements with value k in positions c[k-1] + 1 through c[k], but in reverse order of their appearance in A[1 . . n].
Note that counting sort beats the lower bound of Ω(n lg n) because it is not a comparison sort: there is no comparison between elements. Counting sort uses the actual values of the elements to index into an array.
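The pseudocode above can be cross-checked with a minimal Python sketch. Python lists are 0-indexed, so the counter is decremented before the element is placed; the right-to-left scan is what preserves stability:

```python
def counting_sort(a, k):
    """Stable counting sort for integer keys in the range 1..k (0-indexed lists)."""
    n = len(a)
    c = [0] * (k + 1)
    for x in a:                # lines 3-4: count occurrences of each key
        c[x] += 1
    for i in range(2, k + 1):  # lines 6-7: c[i] = number of keys <= i
        c[i] += c[i - 1]
    b = [0] * n
    for x in reversed(a):      # lines 9-11: right-to-left scan keeps the sort stable
        c[x] -= 1              # decrement first to convert to a 0-based position
        b[c[x]] = x
    return b

print(counting_sort([3, 6, 4, 1, 3, 4, 1, 4], 6))  # [1, 1, 3, 3, 4, 4, 4, 6]
```

Running it on the example input above reproduces the trace: B = [1, 1, 3, 3, 4, 4, 4, 6].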



Radix Sort


Radix sort is a small method that many people intuitively use when alphabetizing a large list of names. (Here the radix is 26, the 26 letters of the alphabet.) Specifically, the list of names is first sorted according to the first letter of each name, that is, the names are arranged in 26 classes. Intuitively, one might want to sort numbers on their most significant digit, but radix sort works counter-intuitively by sorting on the least significant digit first. On the first pass, the numbers are sorted on their least significant digit and combined into an array. Then on the second pass, the numbers are sorted again on their second least-significant digit and combined into an array, and so on.
The following example shows how radix sort operates on seven 3-digit numbers.

INPUT    1st pass    2nd pass    3rd pass
 329       720         720         329
 457       355         329         355
 657       436         436         436
 839       457         839         457
 436       657         355         657
 720       329         457         720
 355       839         657         839

In the above example, the first column is the input. The remaining columns show the list after successive sorts on increasingly significant digit positions. The code for radix sort assumes that each element in the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-order digit.

RADIX_SORT (A, d)
for i ← 1 to d do
    use a stable sort to sort A on digit i
    // counting sort will do the job


Analysis

The running time depends on the stable sort used as the intermediate sorting algorithm. When each digit is in the range 1 to k, and k is not too large, COUNTING_SORT is the obvious choice. In the case of counting sort, each pass over n d-digit numbers takes O(n + k) time. There are d passes, so the total time for radix sort is O(d(n + k)). When d is constant and k = O(n), radix sort runs in linear time.
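The passes above can be sketched in Python. Instead of invoking the full COUNTING_SORT, each pass here distributes the numbers into ten digit buckets, which is an equivalent stable sort on one decimal digit (k = 10):

```python
def radix_sort(a, d):
    """LSD radix sort for non-negative integers of at most d decimal digits.
    Each pass is a stable sort on one digit (equivalent to counting sort, k = 10)."""
    for i in range(d):                               # digit 1 (least significant) up to digit d
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10 ** i) % 10].append(x)   # stable: ties keep their relative order
        a = [x for bucket in buckets for x in bucket]
    return a

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]
```

Running it on the seven 3-digit numbers above reproduces the columns of the table pass by pass.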


Bucket Sort


Bucket sort runs in linear time on the average. It assumes that the input is generated by a random process that distributes elements uniformly over the interval [0, 1).
The idea of bucket sort is to divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute the n input numbers into the buckets. Since the inputs are uniformly distributed over [0, 1), we don't expect many numbers to fall into each bucket. To produce the output, simply sort the numbers in each bucket and then go through the buckets in order, listing the elements in each.
The code assumes that the input is in an n-element array A and each element in A satisfies 0 ≤ A[i] < 1. We also need an auxiliary array B[0 . . n-1] of linked lists (buckets).

BUCKET_SORT (A)
  1. n ← length[A]
  2. for i ← 1 to n do
  3.     insert A[i] into list B[⌊n·A[i]⌋]
  4. for i ← 0 to n-1 do
  5.     sort list B[i] with insertion sort
  6. concatenate the lists B[0], B[1], . . . , B[n-1] together in order

 

Example

Given an input array A[1 . . 10], the array B[0 . . 9] holds the sorted lists, or buckets, after line 5. Bucket i holds values in the interval [i/10, (i+1)/10). The sorted output consists of a concatenation, in order, of the lists: first B[0], then B[1], then B[2], ..., and the last one is B[9].

 

Analysis

All lines except line 5 take O(n) time in the worst case. We can see by inspection that the total time to examine all buckets in line 5 is O(n-1), i.e., O(n).
The only interesting part of the analysis is the time taken by insertion sort in line 5. Let ni be the random variable denoting the number of elements in bucket B[i]. Since insertion sort on m elements takes O(m²) time, the expected time to sort the elements in bucket B[i] is

E[O(ni²)] = O(E[ni²])

Therefore, the total expected time to sort all elements in all buckets is

        Σi=0 to n-1 O(E[ni²]) = O( Σi=0 to n-1 E[ni²] )         ------------ (A)


In order to evaluate this summation, we must determine the distribution of each random variable ni.
We have n elements and n buckets. The probability that a given element falls in bucket B[i] is 1/n, i.e., p = 1/n. (Note that this is the same setting as the "balls-and-bins" problem.)
Therefore, ni follows the binomial distribution B(n, p), which has
    mean:       E[ni] = np = 1
    variance:   Var[ni] = np(1 - p) = 1 - 1/n
For any random variable, we have
    E[ni²] = Var[ni] + E²[ni]
           = 1 - 1/n + 1²
           = 2 - 1/n
           = Θ(1)


Putting this value into equation (A) above, each bucket contributes Θ(1) expected time, so the total expected time for all the insertion sorts is O(n).
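The value E[ni²] = 2 − 1/n derived above can be verified numerically by computing the second moment directly from the Binomial(n, 1/n) probability mass function (a standard-library-only sketch):

```python
from math import comb

def second_moment(n):
    """Exact E[n_i^2] for n_i ~ Binomial(n, 1/n), summed over the pmf."""
    p = 1 / n
    return sum(j * j * comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(n + 1))

for n in (4, 10, 100):
    print(n, second_moment(n), 2 - 1 / n)   # the two columns agree for every n
```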

Now back to our original problem. In the above bucket sort algorithm, we observe

T(n) = [time to insert n elements into the buckets] + [time to go through the auxiliary array B[0 . . n-1], sorting each bucket by INSERTION_SORT]
     = O(n) + n · Θ(1)
     = O(n)

Therefore, the entire Bucket sort algorithm runs in linear expected time.




Laxman Singh


