Non-Recursive Merge Sort


Can someone explain in English how non-recursive merge sort works?

Thanks


Non-recursive (bottom-up) merge sort works by considering window sizes of 1, 2, 4, 8, 16, ..., 2^n over the input array. For each window width ('k' in the code below), every pair of adjacent windows is merged into a temporary array, then copied back into the input array.

Here is my single-function, C-based, non-recursive merge sort. Input and output are in 'a'; temporary storage is in 'b'. One day, I'd like to have a version that is in-place:

float a[50000000], b[50000000];   /* input/output in a[], temporary merge space in b[] */

void mergesort (long num)
{
    int rght, rend;
    int i, j, m;

    for (int k = 1; k < num; k *= 2) {                  /* window width: 1, 2, 4, 8, ... */
        for (int left = 0; left + k < num; left += k*2) {
            rght = left + k;                            /* start of the right window */
            rend = rght + k;                            /* end of the right window */
            if (rend > num) rend = num;
            m = left; i = left; j = rght;
            while (i < rght && j < rend) {              /* merge the two windows into b */
                if (a[i] <= a[j]) {
                    b[m] = a[i]; i++;
                } else {
                    b[m] = a[j]; j++;
                }
                m++;
            }
            while (i < rght) {                          /* copy any remainder of the left window */
                b[m] = a[i];
                i++; m++;
            }
            while (j < rend) {                          /* copy any remainder of the right window */
                b[m] = a[j];
                j++; m++;
            }
            for (m = left; m < rend; m++) {             /* copy the merged result back into a */
                a[m] = b[m];
            }
        }
    }
}

By the way, it is also easy to prove this is O(n log n). The outer loop doubles the window size each time, so k takes on about log n values. Although the inner loop visits many windows, all the windows for a given k together cover the input array exactly once, so each pass of the inner loop is O(n). Combining the two loops: O(n) * O(log n) = O(n log n).
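
In symbols, the same argument (just a sketch, with p counting the passes):

$$T(n) = \sum_{p=1}^{\lceil \log_2 n \rceil} O(n) = O(n \log n)$$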


First, loop through the elements and sort every adjacent pair by swapping the two elements when necessary.

Now, taking the groups two at a time (most naturally adjacent groups, although you could pair the first and last), merge each pair into one group by repeatedly selecting the lowest-valued remaining element from either group until all four elements are merged into a group of four. You now have nothing but groups of four, plus a possible remainder. Wrapping a loop around the previous logic, do it all again, this time working in groups of four. The loop runs until there is only one group.
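
Here is a minimal, illustrative C sketch of that pass structure (not from the answer above): the array contents, the merge() helper, and the printing are assumptions made just for demonstration. It merges adjacent groups of width 1, 2, 4, ... and prints the array after each pass:

/* Sketch only: trace the bottom-up passes on a tiny array. */
#include <stdio.h>
#include <string.h>

static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, m = lo;
    while (i < mid && j < hi) tmp[m++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[m++] = a[i++];
    while (j < hi)  tmp[m++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

int main(void)
{
    int a[] = { 5, 1, 4, 8, 7, 2, 6, 3 };
    int tmp[8];
    int n = 8;
    for (int width = 1; width < n; width *= 2) {        /* groups of 1, 2, 4, ... */
        for (int lo = 0; lo + width < n; lo += 2 * width) {
            int mid = lo + width;
            int hi  = (mid + width < n) ? mid + width : n;
            merge(a, tmp, lo, mid, hi);                 /* merge one adjacent pair of groups */
        }
        printf("after width %d pass:", width);
        for (int i = 0; i < n; i++) printf(" %d", a[i]);
        printf("\n");
    }
    return 0;
}

After the final pass (width 4 here), the array is fully sorted.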


Quoting from Algorithmist:

Bottom-up merge sort is a non-recursive variant of the merge sort, in which the array is sorted by a sequence of passes. During each pass, the array is divided into blocks of size m. (Initially, m = 1). Every two adjacent blocks are merged (as in normal merge sort), and the next pass is made with a twice larger value of m.


Both recursive and non-recursive merge sort have the same time complexity of O(n log n); the two approaches differ only in how the control flow is managed.

In the non-recursive approach, any stack that is needed is defined and used explicitly by the programmer.

In the recursive approach, the stack is the system call stack, used internally to store the return address of each recursive call.
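
For illustration, here is a hypothetical C sketch of the "programmer-defined stack" idea: the recursion of top-down merge sort is simulated with an explicit stack of (lo, hi, stage) frames. The Frame type, the fixed 64-entry stack, and the merge() helper are assumptions made for this sketch, not code from any of the answers here:

#include <string.h>

typedef struct { int lo, hi, stage; } Frame;

static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, m = lo;
    while (i < mid && j < hi) tmp[m++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[m++] = a[i++];
    while (j < hi)  tmp[m++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

void mergesort_explicit_stack(int *a, int *tmp, int n)
{
    Frame stack[64];                        /* depth is only about log2(n), so 64 is plenty */
    int top = 0;
    stack[top++] = (Frame){ 0, n, 0 };
    while (top > 0) {
        Frame *f = &stack[top - 1];
        if (f->hi - f->lo <= 1) { top--; continue; }    /* one element: already sorted */
        int mid = f->lo + (f->hi - f->lo) / 2;
        if (f->stage == 0) {                /* first visit: "recurse" into the left half */
            f->stage = 1;
            stack[top++] = (Frame){ f->lo, mid, 0 };
        } else if (f->stage == 1) {         /* left half sorted: "recurse" into the right half */
            f->stage = 2;
            stack[top++] = (Frame){ mid, f->hi, 0 };
        } else {                            /* both halves sorted: merge them */
            merge(a, tmp, f->lo, mid, f->hi);
            top--;
        }
    }
}

(The bottom-up version shown earlier sidesteps this bookkeeping entirely, since it needs no stack at all.)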


The main reason you would want a non-recursive merge sort is to avoid overflowing the recursion stack. I, for example, am trying to sort 100 million records, each about 1 kB long (100 gigabytes in all), into alphanumeric order. An O(N^2) sort would take roughly 10^16 operations, i.e. decades to run even at 0.1 microseconds per compare; an O(N log N) merge sort takes fewer than 10^10 operations, or less than an hour at the same speed.

However, in the recursive version of MergeSort, sorting 100 million elements results in 50 million recursive calls to MergeSort(). At a few hundred bytes per stack frame, this overflows the recursion stack even though the process easily fits within heap memory. Using the code provided by Rama Hoetzlein above, but with dynamically allocated memory on the heap instead of the stack, I can sort my 100 million records with the non-recursive merge sort without overflowing the stack. An appropriate conversation for a website called "Stack Overflow"!

PS: Thanks for the code, Rama Hoetzlein.

PPS: 100 gigabytes on the heap?!! Well, it's a virtual heap on a Hadoop cluster, and the MergeSort will be implemented in parallel on several machines sharing the load...
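
Going back to the heap-allocation point above, here is a minimal sketch of that idea: the working buffers live on the heap instead of in large global or stack arrays. The record count and the parameterized mergesort(a, b, num) signature are assumptions for illustration only, which is why the call is left commented out:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long num = 100000000L;                      /* 100 million records */
    float *a = malloc(num * sizeof *a);         /* input/output buffer */
    float *b = malloc(num * sizeof *b);         /* temporary merge space */
    if (!a || !b) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    /* ... fill a[0..num-1] with data, then: */
    /* mergesort(a, b, num); */

    free(a);
    free(b);
    return 0;
}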


I am new here. I have modified Rama Hoetzlein's solution (thanks for the ideas). My merge sort does not use the final copy-back loop, and it falls back on insertion sort for small blocks. I benchmarked it on my laptop and it is the fastest version I tried, even faster than the recursive version. By the way, it is in Java and sorts into ascending order. And of course it is iterative. It can be made multithreaded. The code has become complex, so if anyone is interested, please have a look.

Code:

static int[] mergeSort(int[] input_array)
{
    int num = input_array.length;
    int left = 0;
    int right;
    int temp;
    int LIMIT = 16;
    if (num <= LIMIT)
    {
        // Single Insertion Sort
        right = 1;
        while(right < num)
        {
            temp = input_array[right];
            while(( left > (-1) ) && ( input_array[left] > temp ))
            {
                input_array[left+1] = input_array[left--];
            }
            input_array[left+1] = temp;
            left = right;
            right++;
        }
    }
    else
    {
        int i;
        int j;
        //Fragmented Insertion Sort
        right = LIMIT;
        while (right <= num)
        {
            i = left + 1;
            j = left;
            while (i < right)
            {
                temp = input_array[i];
                while(( j >= left ) && ( input_array[j] > temp ))
                {
                    input_array[j+1] = input_array[j--];
                }
                input_array[j+1] = temp;
                j = i;
                i++;
            }
            left = right;
            right = right + LIMIT;
        }
        // Remainder Insertion Sort
        i = left + 1;
        j = left;
        while(i < num)
        {
            temp = input_array[i];
            while(( j >= left ) && ( input_array[j] > temp ))
            {
                input_array[j+1] = input_array[j--];
            }
            input_array[j+1] = temp;
            j = i;
            i++;
        }
        // Rama Hoetzlein method
        int[] temp_array = new int[num];
        int[] swap;
        int k = LIMIT;
        while (k < num)
        {
            left = 0;
            i = k;// The mid point
            right = k << 1;
            while (i < num)
            {
                if (right > num)
                {
                    right = num;
                }
                temp = left;
                j = i;
                while ((left < i) && (j < right))
                {
                    if (input_array[left] <= input_array[j])
                    {
                        temp_array[temp++] = input_array[left++];
                    }
                    else
                    {
                        temp_array[temp++] = input_array[j++];
                    }
                }
                while (left < i)
                {
                    temp_array[temp++] = input_array[left++];
                }
                while (j < right)
                {
                    temp_array[temp++] = input_array[j++];
                }
                // Do not copy back the elements to input_array
                left = right;
                i = left + k;
                right = i + k;
            }
            // Instead of copying back in previous loop, copy remaining elements to temp_array, then swap the array pointers
            while (left < num)
            {
                temp_array[left] = input_array[left++];
            }
            swap = input_array;
            input_array = temp_array;
            temp_array = swap;
            k <<= 1;
        }
    }

    return input_array;
}


Just in case anyone's still lurking in this thread ... I've adapted Rama Hoetzlein's non-recursive merge sort algorithm above to sort doubly linked lists. This new sort is in-place, stable, and avoids the costly list-dividing code found in other linked-list merge sort implementations.

// MergeSort.cpp
// Angus Johnson 2017
// License: Public Domain

#include <cstdio>
#include <ctime>
#include <cstdlib>

struct Node {
    int data;
    Node *next;
    Node *prev;
    Node *jump;
};

inline void Move2Before1(Node *n1, Node *n2)
{
    Node *prev, *next;
    //extricate n2 from linked-list ...
    prev = n2->prev;
    next = n2->next;
    prev->next = next; //nb: prev is always assigned
    if (next) next->prev = prev;
    //insert n2 back into list ...  
    prev = n1->prev;
    if (prev) prev->next = n2;
    n1->prev = n2;
    n2->prev = prev;
    n2->next = n1;
}

void MergeSort(Node *&nodes)
{
    Node *first, *second, *base, *tmp, *prev_base;

    if (!nodes || !nodes->next) return;
    int mul = 1;
    for (;;) {
        first = nodes;
        prev_base = NULL;
        //sort each successive mul group of nodes ...
        while (first) {
            if (mul == 1) {
                second = first->next;
                if (!second) { 
                  first->jump = NULL;
                  break;
                }
                first->jump = second->next;
            }
            else
            {
                second = first->jump;
                if (!second) break;
                first->jump = second->jump;
            }
            base = first;
            int cnt1 = mul, cnt2 = mul;
            //the following 'if' condition marginally improves performance 
            //in an unsorted list but very significantly improves
            //performance when the list is mostly sorted ...
            if (second->data < second->prev->data) 
                while (cnt1 && cnt2) {
                    if (second->data < first->data) {
                        if (first == base) {
                            if (prev_base) prev_base->jump = second;
                            base = second;
                            base->jump = first->jump;
                            if (first == nodes) nodes = second;
                        }
                        tmp = second->next;
                        Move2Before1(first, second);
                        second = tmp;
                        if (!second) { first = NULL; break; }
                        --cnt2;
                    }
                    else
                    {
                        first = first->next;
                        --cnt1;
                    }
                } //while (cnt1 && cnt2)
            first = base->jump;
            prev_base = base;
        } //while (first)
        if (!nodes->jump) break;
        else mul <<= 1;
    } //for (;;)
}

void InsertNewNode(Node *&head, int data)
{
    Node *tmp = new Node;
    tmp->data = data;
    tmp->next = NULL;
    tmp->prev = NULL;
    tmp->jump = NULL;
    if (head) {
        tmp->next = head;
        head->prev = tmp;
        head = tmp;
    }
    else head = tmp;
}

void ClearNodes(Node *head)
{
    if (!head) return;
    while (head) {
        Node *tmp = head;
        head = head->next;
        delete tmp;
    }
}

int main()
{  
    srand(time(NULL));
    Node *nodes = NULL, *n;
    const int len = 1000000; //1 million nodes 
    for (int i = 0; i < len; i++)
        InsertNewNode(nodes, rand() >> 4);

    clock_t t = clock();
    MergeSort(nodes);    //~1/2 sec for 1 mill. nodes on Pentium i7. 
    t = clock() - t;
    printf("Sort time: %d msec\n\n", (int)(t * 1000 / CLOCKS_PER_SEC));

    n = nodes;
    while (n)
    {
        if (n->prev && n->data < n->prev->data) { 
            printf("oops! sorting's broken\n"); 
            break;
        }
        n = n->next;
    }
    ClearNodes(nodes);
    printf("All done!\n\n");
    getchar();
    return 0;
}

Edited 2017-10-27: Fixed a bug affecting odd-length lists


Any interest in this anymore? Probably not. Oh well. Here goes nothing.

The insight of merge-sort is that you can merge two (or several) small sorted runs of records into one larger sorted run, and you can do so with simple stream-like operations "read first/next record" and "append record" -- which means you don't need a big data set in RAM at once: you can get by with just two records, each taken from a distinct run. If you can just keep track of where in your file the sorted runs start and end, you can simply merge pairs of adjacent runs (into a temp file) repeatedly until the file is sorted: this takes a logarithmic number of passes over the file.
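
As a concrete illustration of those stream operations, here is a small C sketch that merges two sorted runs of integers, one record at a time. The plain-text integer record format and the function name are assumptions, not from the answer above:

#include <stdio.h>

void merge_runs(FILE *run1, FILE *run2, FILE *out)
{
    int x, y;
    int have_x = (fscanf(run1, "%d", &x) == 1);   /* read the first record of each run */
    int have_y = (fscanf(run2, "%d", &y) == 1);
    while (have_x && have_y) {                    /* append the smaller head record */
        if (x <= y) {
            fprintf(out, "%d\n", x);
            have_x = (fscanf(run1, "%d", &x) == 1);
        } else {
            fprintf(out, "%d\n", y);
            have_y = (fscanf(run2, "%d", &y) == 1);
        }
    }
    while (have_x) {                              /* drain whichever run remains */
        fprintf(out, "%d\n", x);
        have_x = (fscanf(run1, "%d", &x) == 1);
    }
    while (have_y) {
        fprintf(out, "%d\n", y);
        have_y = (fscanf(run2, "%d", &y) == 1);
    }
}

Each pass over the data just calls this on successive pairs of adjacent runs, writing to a temporary file.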

A single record is trivially a sorted run; each time you merge two adjacent runs, the run size doubles, so that's one way to keep track of run boundaries. The other is to work with a priority queue of runs: take the two smallest runs from the queue, merge them, and enqueue the result, until only one run remains. This is appropriate if you expect your data to naturally start out containing sorted runs.

In practice with enormous data sets you'll want to exploit the memory hierarchy. Suppose you have gigabytes of RAM and terabytes of data. Why not merge a thousand runs at once? Indeed you can do this, and a priority-queue of runs can help. That will significantly decrease the number of passes you have to make over a file to get it sorted. Some details are left as an exercise for the reader.
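
Here is a hypothetical sketch of that multi-way merge: a small binary min-heap holds the current head record of each run, so each output step costs O(log k). The HeapItem type, the MAX_RUNS limit, and the integer record format are assumptions made for this sketch only:

#include <stdio.h>

#define MAX_RUNS 1024

typedef struct { int value; FILE *run; } HeapItem;

static void sift_down(HeapItem *h, int n, int i)
{
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && h[l].value < h[smallest].value) smallest = l;
        if (r < n && h[r].value < h[smallest].value) smallest = r;
        if (smallest == i) return;
        HeapItem tmp = h[i]; h[i] = h[smallest]; h[smallest] = tmp;
        i = smallest;
    }
}

void merge_k_runs(FILE **runs, int k, FILE *out)
{
    HeapItem heap[MAX_RUNS];
    int n = 0;
    if (k > MAX_RUNS) return;                     /* keep the sketch simple */
    for (int i = 0; i < k; i++) {                 /* prime the heap with one record per run */
        int v;
        if (fscanf(runs[i], "%d", &v) == 1)
            heap[n++] = (HeapItem){ v, runs[i] };
    }
    for (int i = n / 2 - 1; i >= 0; i--) sift_down(heap, n, i);
    while (n > 0) {
        fprintf(out, "%d\n", heap[0].value);      /* emit the smallest pending record */
        int v;
        if (fscanf(heap[0].run, "%d", &v) == 1)   /* refill from the same run, or drop it */
            heap[0].value = v;
        else
            heap[0] = heap[--n];
        sift_down(heap, n, 0);
    }
}

With k around a thousand, each pass multiplies the run length by a thousand instead of by two, which is what cuts the number of passes over the file so drastically.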
