
This article describes in detail the ideas, implementations, and complexity of eight common sorting algorithms.

Bubble Sort

gist

Bubble sort is an exchange sort.

What is a swap sort?

Swap sort: Compare the keys to be sorted in pairs, and swap the pairs that do not meet the order requirements until the entire table meets the order requirements.

Algorithmic thinking

It repeatedly walks through the sequence to be sorted, comparing two elements at a time, and swapping them if they are in the wrong order. The work of visiting the sequence is repeated until no more exchanges are needed, that is, the sequence has been sorted.

The name of this algorithm comes from the fact that the smaller elements will slowly "float" to the top of the sequence through the exchange, hence the name.

Suppose there is an unordered sequence of size N. Bubble sort is to find the i-th small (large) element through pairwise comparison in each sorting process, and arrange it up.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java5-1564366173.jpg" alt="Bubble sort demonstration">

Take the above figure as an example to demonstrate the actual process of bubble sort:

Suppose there is an unordered sequence { 4, 3, 1, 2, 5 }.

<li>
The first pass: through pairwise comparison, find the smallest value, 1, and put it in the first position of the sequence.
</li>
<li>
The second pass: through pairwise comparison, find the second smallest value, 2, and put it in the second position of the sequence.
</li>
<li>
The third pass: through pairwise comparison, find the third smallest value, 3, and put it in the third position of the sequence.
</li>

At this point, all elements have been sorted, and the sorting is over.

To convert the above process into code, we need to think like a machine, otherwise the compiler can't understand it.

Suppose you want to sort an unordered sequence of size N in ascending order (i.e., from smallest to largest).

<li>
In each pass, the i-th smallest element needs to be found by comparison.
</li>
<li>
Therefore, we need an outer loop that starts at the beginning of the array (index 0) and scans to the second-to-last element (index N - 2); the one element left at the end must be the largest.
</li>

Assume we are in the i-th pass; the first i - 1 elements are then already sorted. To place the i-th element, we just scan from the end of the array back to position i, comparing adjacent elements pairwise.

<li>
So an inner loop is required, starting at the end of the array (index N - 1) and scanning up to index i + 1, comparing each element with its predecessor.
</li>

public void bubbleSort(int[] list) {
    int temp = 0; // temporary variable for swapping

    // number of passes
    for (int i = 0; i < list.length - 1; i++) {
        // Compare adjacent pairs from back to front; after one pass,
        // the i-th smallest number is placed at position i
        for (int j = list.length - 1; j > i; j--) {
            // If the preceding number is greater than the following one, swap them
            if (list[j - 1] > list[j]) {
                temp = list[j - 1];
                list[j - 1] = list[j];
                list[j] = temp;
            }
        }

        System.out.format("Pass %d:\t", i);
        printAll(list);
    }
}

Analysis of Algorithms

The performance of the bubble sort algorithm

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java4-1564366173.png" alt="Performance of the bubble sort algorithm">

time complexity

If the input is initially in ascending order, one pass completes the sort. The required number of key comparisons C and record moves M then both reach their minimums: Cmin = N - 1, Mmin = 0. Therefore, the best-case time complexity of bubble sort is O(N).

If the input is in reverse order, N - 1 passes are required. The i-th pass needs N - i key comparisons (1 ≤ i ≤ N - 1), and every comparison leads to a swap, which moves records three times. In this case, comparisons and moves both reach their maximums:

Cmax = N(N - 1)/2 = O(N²)

Mmax = 3N(N - 1)/2 = O(N²)

The worst-case time complexity of bubble sort is O(N²), and its average time complexity is also O(N²).

To sum it up in one sentence: the closer the data is to ascending order, the better bubble sort performs.

Algorithm Stability

Bubble sort moves small elements forward (or large elements backward) by comparing adjacent elements and swapping only those two. Equal elements are never swapped, so their relative order is preserved: bubble sort is a stable sorting algorithm.

optimization

A common improvement to bubble sort is to add a flag variable that records whether any swap happened during a pass.

If a pass completes without a single swap, the data is already fully sorted, and the sort can end immediately, avoiding unnecessary comparisons.

core code

// Optimized version of bubbleSort
public void bubbleSort_2(int[] list) {
    int temp = 0; // temporary variable for swapping
    boolean bChange = false; // swap flag

    // number of passes
    for (int i = 0; i < list.length - 1; i++) {
        bChange = false;
        // Compare adjacent pairs from back to front; after one pass,
        // the i-th smallest number is placed at position i
        for (int j = list.length - 1; j > i; j--) {
            // If the preceding number is greater than the following one, swap them
            if (list[j - 1] > list[j]) {
                temp = list[j - 1];
                list[j - 1] = list[j];
                list[j] = temp;
                bChange = true;
            }
        }

        // If the flag is still false, no swap happened in this pass:
        // the sequence is already sorted and we can stop
        if (!bChange)
            break;

        System.out.format("Pass %d:\t", i);
        printAll(list);
    }
}
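
To see the early-exit flag pay off, here is a self-contained sketch (the class name and the pass counter are illustrative additions, and the printing is omitted) that returns how many passes were executed; on already-sorted input it stops after a single pass:

```java
// A minimal, self-contained sketch of the optimized bubble sort;
// the class name BubbleSortDemo and the pass counter are illustrative.
public class BubbleSortDemo {
    // Sorts list in place and returns how many passes were executed.
    public static int bubbleSortCountingPasses(int[] list) {
        int passes = 0;
        for (int i = 0; i < list.length - 1; i++) {
            boolean changed = false;
            for (int j = list.length - 1; j > i; j--) {
                if (list[j - 1] > list[j]) {
                    int temp = list[j - 1];
                    list[j - 1] = list[j];
                    list[j] = temp;
                    changed = true;
                }
            }
            passes++;
            if (!changed) // no swap in this pass: already sorted
                break;
        }
        return passes;
    }
}
```

On { 1, 2, 3, 4, 5 } this returns 1, confirming that the flag skips all remaining passes.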

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, with and without repeated elements. The samples are randomly generated, and the results have been verified by actual testing.

Quicksort

gist

Quicksort is an exchange sort.

Quicksort was proposed by C. A. R. Hoare in 1962.

Algorithmic thinking

Its basic idea is:

One pass of sorting splits the data into two independent parts around a pivot: everything to the left of the split point is smaller than the pivot, and everything to the right is larger.

Each of the two parts is then quicksorted in the same way; the whole process runs recursively until the entire sequence is ordered.

A detailed diagram is often more descriptive than a lot of text, so let's go straight to the picture:

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java1-1564366174.jpg" alt="Quicksort demonstration">

The above figure demonstrates the process of quick sort:

<li>
The initial state is an unordered array: 2, 4, 5, 1, 3.
</li>
<li>
After the operation steps shown above, the first pass is complete and a new array is obtained: 1, 2, 5, 4, 3.
</li>
<li>
In the new array, 2 is the dividing point: everything to its left is smaller than 2, and everything to its right is larger.
</li>
<li>
Since 2 has already found its proper position in the array, it does not need to move again.
</li>
<li>
The sub-array to the left of 2 has only one element, 1, so it obviously needs no sorting; its position is settled. (Note: in this case the left and right pointers obviously coincide. So, in the code, we can make left less than right the condition for sorting; when it does not hold, there is nothing to sort.)
</li>
<li>
For the sub-array 5, 4, 3 on the right side of 2, set left to point at 5 and right to point at 3, and repeat steps one to four in the figure to sort this new sub-array.
</li>

core code

public int division(int[] list, int left, int right) {
    // Use the leftmost number as the pivot
    int base = list[left];
    while (left < right) {
        // Scan from the right end toward the left until a number smaller than base is found
        while (left < right && list[right] >= base)
            right--;
        // Move that smaller element into the leftmost position
        list[left] = list[right];

        // Scan from the left end toward the right until a number larger than base is found
        while (left < right && list[left] <= base)
            left++;
        // Move that larger element into the rightmost position
        list[right] = list[left];
    }

    // Finally place base at position left. Everything to its left is now smaller,
    // and everything to its right is larger.
    list[left] = base;
    return left;
}

private void quickSort(int[] list, int left, int right) {

    // The left index must be less than the right index, otherwise the range is empty
    if (left < right) {
        // Partition the array and get the pivot index for the next split
        int base = division(list, left, right);

        System.out.format("base = %d:\t", list[base]);
        printPart(list, left, right);

        // Recursively partition and sort the values to the left of the pivot index
        quickSort(list, left, base - 1);

        // Recursively partition and sort the values to the right of the pivot index
        quickSort(list, base + 1, right);
    }
}
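
For experimenting outside the article's harness, the two methods above can be condensed into a self-contained static sketch with the printing removed (the class name QuickSortDemo is an illustrative assumption):

```java
// Self-contained static version of the partition/quicksort pair above,
// with the printing removed; the class name QuickSortDemo is illustrative.
public class QuickSortDemo {
    static int division(int[] list, int left, int right) {
        int base = list[left]; // leftmost element as pivot
        while (left < right) {
            while (left < right && list[right] >= base)
                right--;
            list[left] = list[right]; // move smaller element to the left side
            while (left < right && list[left] <= base)
                left++;
            list[right] = list[left]; // move larger element to the right side
        }
        list[left] = base; // pivot lands in its final position
        return left;
    }

    public static void quickSort(int[] list, int left, int right) {
        if (left < right) {
            int base = division(list, left, right);
            quickSort(list, left, base - 1);
            quickSort(list, base + 1, right);
        }
    }
}
```

Sorting a whole array is then a single call: quickSort(list, 0, list.length - 1).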

Analysis of Algorithms

Quick Sort Algorithm Performance

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java5-1564366175.png" alt="Performance of the quicksort algorithm">

time complexity

When the data is already ordered, each pass partitions on the first key and leaves one subsequence empty; execution efficiency is then at its worst.

When the data is randomly distributed, each partition splits the sequence into two subsequences of nearly equal size; execution efficiency is then at its best.

Therefore, the more randomly distributed the data, the better quicksort performs; the closer the data is to being ordered, the worse it performs.

space complexity

Quicksort needs one extra slot to hold the pivot value during each partition, but the dominant space cost is the recursion stack: its depth is about log₂N on average and N in the worst case, so the space complexity is O(log₂N) on average and O(N) at worst.

Algorithm Stability

In quicksort, equal elements may swap order due to partitions, so it is an unstable algorithm.

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The samples include: the case where the number of arrays is odd or even; the case where the elements are repeated or not. And the samples are random samples, the actual measurement is valid.

Insertion Sort

gist

Direct insertion sort is the simplest kind of insertion sort.

Insertion sort: in each pass, insert one record to be sorted into its proper position in the already-sorted sequence according to its key, until all records have been inserted.

Algorithmic thinking

Before explaining direct insertion sort, let's recall the process of picking up playing cards.

<img src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java2-1564366176.jpeg" alt="Picking up playing cards">

<li>
First, pick up a 5;
</li>
<li>
then pick up a 4: it is smaller than 5, so insert it before the 5;
</li>
<li>
then a 6: larger than 5, so insert it after the 5;
</li>
<li>
then an 8: larger than 6, so insert it after the 6;
</li>
<li>
…
</li>
<li>
and at the final glance: a straight flush! Excellent.
</li>

The above process is exactly a typical direct insertion sort: each time, a new item is inserted into the appropriate position of an ordered sequence.

Very simple. Next, we need to translate this algorithm into a programming language.

Suppose there is a set of unordered sequences R0, R1, … , RN-1.

<li>
We first treat the element with subscript 0 in this sequence as an ordered sequence with the number of elements being 1.
</li>
<li>
Then, we need to insert R1, R2, … , RN-1 into this ordered sequence in turn. So, we need an outer loop to scan from index 1 to N-1.
</li>
<li>
Next, consider the insertion process itself. Suppose we are inserting Ri into the previously ordered sequence. From the above, when Ri is inserted, the first i elements R0 ~ Ri-1 must already be in order.
</li>

So we need to compare Ri with R0 ~ Ri-1 to determine the right place to insert it. This requires an inner loop; we generally compare from back to front, that is, scan from index i - 1 down to 0.

core code

public void insertSort(int[] list) {
    // Print the first element
    System.out.format("i = %d:\t", 0);
    printPart(list, 0, 0);

    // The first number is trivially sorted; traverse from the second number on,
    // inserting each into the ordered sequence
    for (int i = 1; i < list.length; i++) {
        int j = 0;
        int temp = list[i]; // take the i-th number; insert it after comparison with the first i numbers

        // The first i numbers are already in ascending order, so as long as the current
        // number being compared (list[j]) is greater than temp, shift it one position back
        for (j = i - 1; j >= 0 && temp < list[j]; j--) {
            list[j + 1] = list[j];
        }
        list[j + 1] = temp;

        System.out.format("i = %d:\t", i);
        printPart(list, 0, i);
    }
}

Analysis of Algorithms

Algorithm performance of direct insertion sort

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java7-1564366177.png" alt="Performance of direct insertion sort">

time complexity

When the data is in ascending order, efficiency is at its best: no preceding elements need to move for any insertion, and the time complexity is O(N).

When the data is in reverse order, efficiency is at its worst: every insertion shifts all preceding elements backward, and the time complexity is O(N²).

Therefore, the closer the data is to ascending order, the better direct insertion sort performs.

space complexity

As the direct insertion sort code shows, the sorting process needs only one temporary variable to hold the value being inserted, so the space complexity is O(1).

Algorithm Stability

In the process of direct insertion sort, there is no need to change the position of equal numerical elements, so it is a stable algorithm.

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, with and without repeated elements. The samples are randomly generated, and the results have been verified by actual testing.

Shell Sort

gist

Shell sort, also known as diminishing increment sort, is an insertion sort. It is a more powerful variant of direct insertion sort.

The method is named after D. L. Shell, who proposed it in 1959.

Algorithmic thinking

The basic idea of Shell sort is:

Group the records by a step size (gap), and sort each group with direct insertion sort.

As the step size gradually decreases, each group contains more and more records; when the step size reaches 1, the whole data set forms a single group, becomes ordered, and the sort is complete.

Let's have a deeper understanding of this process through the demonstration diagram.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java6-1564366177.jpg" alt="Shell sort demonstration">

In the picture above:

Initially, there is an unordered sequence of size 10.

In the first pass, we set gap1 = N / 2 = 5: elements 5 apart form a group, giving 5 groups.

<li>
Next, sort each group by direct insertion sort.
</li>

In the second pass, we halve the previous gap: gap2 = gap1 / 2 = 2 (integer division). Elements 2 apart now form a group, giving 2 groups.

<li>
Sort each group by direct insertion sort.
</li>

In the third pass, the gap is halved again: gap3 = gap2 / 2 = 1, so elements 1 apart form a single group.

<li>
Sort the group by direct insertion sort. At this point, the sort is finished.
</li>

Note that the figure contains two elements with the equal value 5, and we can clearly see that their relative positions are swapped during the sort.

So, Shell sort is an unstable algorithm.

core code

public void shellSort(int[] list) {
    int gap = list.length / 2;

    while (1 <= gap) {
        // Treat elements gap apart as one group, and scan all groups
        for (int i = gap; i < list.length; i++) {
            int j = 0;
            int temp = list[i];

            // Insertion-sort the group of elements that are gap apart
            for (j = i - gap; j >= 0 && temp < list[j]; j = j - gap) {
                list[j + gap] = list[j];
            }
            list[j + gap] = temp;
        }

        System.out.format("gap = %d:\t", gap);
        printAll(list);
        gap = gap / 2; // shrink the increment
    }
}

Analysis of Algorithms

Algorithm performance of Shell sort

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java7-1564366178.png" alt="Performance of Shell sort">

time complexity

The choice of step sizes is an important part of Shell sort. Any step sequence works as long as the final step size is 1.

The algorithm first sorts with a large step size, then continues with progressively smaller ones, and finally sorts with a step size of 1. At step size 1 the algorithm degenerates into insertion sort, which guarantees the data ends up fully sorted.

Donald Shell initially suggested choosing a step size of N/2 and halving it until it reaches 1. While this already beats O(N²)-class algorithms such as insertion sort, there is still room to reduce the average and worst-case times. Probably the most important property of Shell sort is that a sequence sorted with a larger step size stays sorted with respect to that step size after later passes with smaller ones.

For example, if a sequence is sorted by step 5 and then by step 3, it is sorted not only by step 3 but also still by step 5. If this were not so, later passes would undo earlier work, and the algorithm could not finish so quickly.

The best known step sequence was proposed by Sedgewick (1, 5, 19, 41, 109, …); the terms of the sequence come from two closed-form formulas.

The same study also shows that "comparison is the dominant operation in Shell sort, not exchange." With such step sequences, Shell sort is faster than insertion sort and heap sort, and even faster than quicksort on small arrays; on large data sets, however, Shell sort is still slower than quicksort.
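
As a quick sanity check of that sequence: from memory, Sedgewick's terms can be generated from the two closed forms 9·4^i - 9·2^i + 1 and 4^i - 3·2^i + 1 (treat these formulas as an assumption of this sketch, not a claim from the article); merging the positive values of both reproduces the opening terms 1, 5, 19, 41, 109, …:

```java
import java.util.TreeSet;

// Generates the opening terms of Sedgewick's gap sequence from the two
// closed forms 9*4^i - 9*2^i + 1 and 4^i - 3*2^i + 1 (i >= 0); only
// positive terms are kept. Class and method names are illustrative.
public class SedgewickGaps {
    public static int[] firstGaps(int count) {
        TreeSet<Integer> gaps = new TreeSet<>(); // sorted and deduplicated
        for (int i = 0; i < 16; i++) {
            long a = 9L * (1L << (2 * i)) - 9L * (1L << i) + 1; // 9*4^i - 9*2^i + 1
            long b = (1L << (2 * i)) - 3L * (1L << i) + 1;      // 4^i - 3*2^i + 1
            if (a > 0 && a <= Integer.MAX_VALUE) gaps.add((int) a);
            if (b > 0 && b <= Integer.MAX_VALUE) gaps.add((int) b);
        }
        return gaps.stream().limit(count).mapToInt(Integer::intValue).toArray();
    }
}
```

A Shell sort would then walk this sequence from the largest gap below N down to 1.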

Algorithm Stability

As the demonstration diagram above shows, equal elements may exchange positions during Shell sort, so Shell sort is an unstable algorithm.

Comparison of Insertion Sort and Shell Sort

<li>
Insertion sort is stable; Shell sort is unstable.
</li>
<li>
Insertion sort is more suitable for collections whose records are already basically ordered.
</li>
<li>
Shell sort needs fewer comparisons and moves than direct insertion sort; the larger N is, the more obvious the effect.
</li>
<li>
In Shell sort, the chosen increment sequence gap must satisfy one condition: the last step size must be 1.
</li>
<li>
Insertion sort also works on linked storage structures; Shell sort does not suit linked structures.
</li>

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, with and without repeated elements. The samples are randomly generated, and the results have been verified by actual testing.

Simple Selection Sort

gist

Simple selection sort is a selection sort.

Selection sort: each time, select the record with the smallest key from the records not yet sorted and append it to the end of the already-sorted sequence, until all records are sorted.

Algorithmic thinking

<li>
From the sequence to be sorted, find the element with the smallest keyword;
</li>
<li>
If the smallest element is not the first element of the sequence to be sorted, swap it with the first element;
</li>
<li>
From the remaining N - 1 elements, find the element with the smallest keyword, and repeat steps 1 and 2 until the sorting ends.
</li>

As shown in the figure, in the i-th pass, the current i-th smallest element is placed at position i.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java5-1564366178.jpg" alt="Simple selection sort demonstration">
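
This section, unlike the others, does not come with a core-code listing, so here is a minimal self-contained sketch of the three steps above (the class and method names are illustrative, not from the original article):

```java
// A minimal sketch of simple selection sort following the steps above;
// the class name SelectionSortDemo is illustrative.
public class SelectionSortDemo {
    public static void selectionSort(int[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            // Step 1: find the smallest element among list[i..N-1]
            int min = i;
            for (int j = i + 1; j < list.length; j++) {
                if (list[j] < list[min]) {
                    min = j;
                }
            }
            // Step 2: if it is not already at position i, swap it there
            if (min != i) {
                int temp = list[i];
                list[i] = list[min];
                list[min] = temp;
            }
            // Step 3: the outer loop repeats the search on the remaining elements
        }
    }
}
```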

Analysis of Algorithms

Performance of Simple Selection Sort Algorithm

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java1-1564366179.png" alt="Performance of simple selection sort">

time complexity

The number of comparisons in simple selection sort is independent of the initial order of the sequence. If the sequence to be sorted has N elements, the number of comparisons is always N(N - 1)/2.

The number of moves, however, depends on the initial order of the sequence. When the sequence is already in ascending order, the number of moves is minimal: 0.

When the sequence is in reverse order, the number of moves is maximal: 3N(N - 1)/2.

Therefore, in summary, the time complexity of simple selection sort is O(N²).

space complexity

Simple selection sort needs one temporary variable for swapping values, so its space complexity is O(1).

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, with and without repeated elements. The samples are randomly generated, and the results have been verified by actual testing.

Heap Sort

gist

Before introducing heap sorting, we first need to explain what a heap is.

A heap is a complete binary tree stored sequentially in an array.

If the key of each node is not greater than the keys of its children, the heap is called a small root heap (min-heap).

If the key of each node is not less than the keys of its children, the heap is called a big root heap (max-heap).

For example, a sequence of n elements {R0, R1, …, Rn-1} is called a heap if and only if one of the following relations holds:

<li>
Ri ≤ R2i+1 and Ri ≤ R2i+2 (small root heap)
</li>
<li>
Ri ≥ R2i+1 and Ri ≥ R2i+2 (big root heap)
</li>

where i = 0, 1, …, ⌊n/2⌋ - 1, whenever the corresponding child exists.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java6-1564366180.png" alt="A small root heap">

As shown in the figure above, the sequence R{3, 8, 15, 31, 25} is a typical small root heap.

There are two parent nodes in the heap: element 3 and element 8.

Element 3 is R[0] in the array; its left child is R[1] and its right child is R[2].

Element 8 is R[1] in the array; its left child is R[3], its right child is R[4], and its parent is R[0]. We can see that they satisfy the following rules:

Suppose the current element is represented by R[i] in the array, then,

<li>
Its left child node is: R[2*i+1];
</li>
<li>
Its right child node is: R[2*i+2];
</li>
<li>
Its parent node is: R[(i-1)/2];
</li>
<li>
For a small root heap, R[i] ≤ R[2*i+1] and R[i] ≤ R[2*i+2].
</li>
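
The index relations above are easy to check mechanically. A small sketch (illustrative class and method names) verifies the small-root-heap property for the example sequence R{3, 8, 15, 31, 25}:

```java
// Checks the small root heap property R[i] <= R[2i+1] and R[i] <= R[2i+2]
// for every parent index i; the class name HeapCheck is illustrative.
public class HeapCheck {
    public static boolean isSmallRootHeap(int[] r) {
        for (int i = 0; 2 * i + 1 < r.length; i++) {
            if (r[i] > r[2 * i + 1]) return false;                          // left child
            if (2 * i + 2 < r.length && r[i] > r[2 * i + 2]) return false;  // right child
        }
        return true;
    }
}
```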

Algorithmic thinking

<li>
First, adjust the array R[0..n] into a heap according to the definition of the heap (this process is called building the initial heap), then swap R[0] and R[n];
</li>
<li>
Next, re-adjust R[0..n-1] into a heap and swap R[0] and R[n-1];
</li>
<li>
Repeat this until R[0] and R[1] are swapped.
</li>
</li>

The above ideas can be summarized into two operations:

<li>
Build the initial heap from the initial array (a complete binary tree in which every parent node's value is not less than its children's, i.e. a big root heap).
</li>
<li>
Each pass swaps the first and last elements, outputting the current maximum at the end, and then re-adjusts the remaining elements into a big root heap.
</li>

When the last element has been output, the array is already in ascending order.

Let's first look at how to build the initial heap through a detailed example diagram.

Given an unordered sequence { 1, 3, 4, 5, 2, 6, 9, 7, 8, 0 }.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java9-1564366181.jpg" alt="Building the initial heap">

With the initial heap constructed, let's look at the full heapsort processing:

Again for the aforementioned unordered sequence { 1, 3, 4, 5, 2, 6, 9, 7, 8, 0 }.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java0-1564366181.jpg" alt="Heap sort process">

These two figures should demonstrate the operation of heap sort quite intuitively.

core code

public void HeapAdjust(int[] array, int parent, int length) {
    int temp = array[parent]; // save the current parent node
    int child = 2 * parent + 1; // start with the left child

    while (child < length) {
        // If a right child exists and is greater than the left child, pick the right child
        if (child + 1 < length && array[child] < array[child + 1]) {
            child++;
        }

        // If the parent's value is already at least the child's value, stop
        if (temp >= array[child])
            break;

        // Move the child's value up to the parent
        array[parent] = array[child];

        // Continue sifting down from this child
        parent = child;
        child = 2 * child + 1;
    }

    array[parent] = temp;
}

public void heapSort(int[] list) {
    // Build the initial heap
    for (int i = list.length / 2; i >= 0; i--) {
        HeapAdjust(list, i, list.length);
    }

    // n - 1 passes complete the sort
    for (int i = list.length - 1; i > 0; i--) {
        // Swap the last element with the first element
        int temp = list[i];
        list[i] = list[0];
        list[0] = temp;

        // Sift down from R[0] to restore a heap of i elements
        HeapAdjust(list, 0, i);
        System.out.format("Pass %d:\t", list.length - i);
        printPart(list, 0, list.length - 1);
    }
}
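
For experimenting outside the article's harness, the two methods above can be condensed into a self-contained static variant with the printing removed (the class name HeapSortDemo is an illustrative assumption):

```java
// Self-contained static variant of HeapAdjust/heapSort above, with the
// printing removed; the class name HeapSortDemo is illustrative.
public class HeapSortDemo {
    static void heapAdjust(int[] array, int parent, int length) {
        int temp = array[parent];   // value being sifted down
        int child = 2 * parent + 1; // left child first
        while (child < length) {
            // pick the larger of the two children
            if (child + 1 < length && array[child] < array[child + 1])
                child++;
            if (temp >= array[child]) // heap property restored
                break;
            array[parent] = array[child];
            parent = child;
            child = 2 * child + 1;
        }
        array[parent] = temp;
    }

    public static void heapSort(int[] list) {
        // build the initial big root heap
        for (int i = list.length / 2 - 1; i >= 0; i--)
            heapAdjust(list, i, list.length);
        // repeatedly move the maximum to the end and re-adjust
        for (int i = list.length - 1; i > 0; i--) {
            int temp = list[i];
            list[i] = list[0];
            list[0] = temp;
            heapAdjust(list, 0, i);
        }
    }
}
```

Running it on the article's example { 1, 3, 4, 5, 2, 6, 9, 7, 8, 0 } yields the ascending sequence 0 through 9.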

Analysis of Algorithms

Performance of the heap sort algorithm

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java0-1564366182.png" alt="Performance of the heap sort algorithm">

time complexity

The heap is stored sequentially, because the binary tree corresponding to a heap is a complete binary tree, and complete binary trees are usually stored sequentially.

Heap sort runs in O(Nlog₂N) time in the best, average, and worst cases. It is particularly well suited when you only need a partially sorted sequence up to the k-th smallest element: that takes O(n + klog₂n) time, and if k ≤ n/log₂n the total is O(n).

Algorithm Stability

Heap sort is an unstable sorting method, because during heap adjustment the comparisons and exchanges of keys follow a path from a node down to a leaf. For equal keys, a key that originally came later may therefore be exchanged ahead of one that came earlier.

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, with and without repeated elements. The samples are randomly generated, and the results have been verified by actual testing.

Merge Sort

gist

Merge sort is an efficient sorting algorithm based on the merge operation and a very typical application of divide and conquer.

It merges already-ordered subsequences to obtain a completely ordered sequence: first make each subsequence ordered, then make the subsequence segments ordered among themselves. Merging two sorted lists into one sorted list is called a two-way merge.

Algorithmic thinking

Treat the sequence to be sorted, R[0…n-1], as n ordered sequences of length 1 and merge adjacent lists in pairs to obtain about n/2 ordered lists of length 2; merge these again to obtain about n/4 ordered lists of length 4; and so on, until a single ordered sequence of length n is obtained.

To sum up:

Merge sort actually does two things:

<li>
"Decomposition" - divide the sequence in half each time.
</li>
<li>
"Merging" - merge the divided segments in pairs so that each merged segment is ordered.
</li>

Let's consider the second step first: how do we merge?

In each merge step, two ordered sequence segments are merged into one ordered segment.

The two ordered segments are R[low..mid] and R[mid+1..high], respectively.

They are first merged into a local temporary array R2; once the merge completes, R2 is copied back into R.

For convenience, we call R[low..mid] the first segment and R[mid+1..high] the second segment.

Each time, take one record from each segment and compare their keys; put the smaller one into R2. When one segment is exhausted, copy the rest of the other segment directly into R2.

After this process, R2 is an ordered sequence; copying it back into R completes one merge.

core code

public void Merge(int[] array, int low, int mid, int high) {
    int i = low; // index into the first segment
    int j = mid + 1; // index into the second segment
    int k = 0; // index into the temporary merged array
    int[] array2 = new int[high - low + 1]; // temporary merged array

    // Scan the first and second segments until one of them is exhausted
    while (i <= mid && j <= high) {
        // Take the smaller of the two current elements, store it in the
        // merged array, and advance that segment's index.
        // Using <= keeps equal elements in their original order (stable).
        if (array[i] <= array[j]) {
            array2[k] = array[i];
            i++;
            k++;
        } else {
            array2[k] = array[j];
            j++;
            k++;
        }
    }

    // If the first segment is not exhausted, copy the rest into the merged array
    while (i <= mid) {
        array2[k] = array[i];
        i++;
        k++;
    }

    // If the second segment is not exhausted, copy the rest into the merged array
    while (j <= high) {
        array2[k] = array[j];
        j++;
        k++;
    }

    // Copy the merged sequence back into the original array
    for (k = 0, i = low; i <= high; i++, k++) {
        array[i] = array2[k];
    }
}

Having mastered the method of merging, let's learn how to decompose.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java3-1564366183.jpg" alt="Merge sort decomposition process">

In a given merge pass, let the length of each sub-table be gap; then before merging there are about n/gap ordered sub-tables in R[0…n-1]: R[0…gap-1], R[gap…2*gap-1], …, with the last sub-table ending at R[n-1].

When calling Merge on adjacent sub-tables, the boundary cases must be handled specially.

If the number of sub-tables is odd, the last sub-table has no partner and is carried over to the next pass unchanged (it gets a "bye" in this round); if the number of sub-tables is even, note that the upper bound of the last sub-table's interval is n-1, so its length may be less than gap.

core code

public void MergePass(int[] array, int gap, int length) {
    int i = 0;

    // Merge pairs of adjacent sub-tables of length gap
    for (i = 0; i + 2 * gap - 1 < length; i = i + 2 * gap) {
        Merge(array, i, i + gap - 1, i + 2 * gap - 1);
    }

    // Two sub-tables remain, the latter shorter than gap
    if (i + gap - 1 < length - 1) {
        Merge(array, i, i + gap - 1, length - 1);
    }
}

public int[] sort(int[] list) {
    // Double the sub-table length each pass: 1, 2, 4, ...
    for (int gap = 1; gap < list.length; gap = 2 * gap) {
        MergePass(list, gap, list.length);
        System.out.print("gap = " + gap + ":\t");
        this.printAll(list);
    }
    return list;
}
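The code above is the bottom-up (non-recursive) form. The "decompose then merge" description also maps naturally onto a top-down recursive form, sketched below as a self-contained class (the class name and `main` driver are mine, not from the article): split the range at the midpoint, sort each half recursively, then merge the two sorted halves.

```java
import java.util.Arrays;

public class MergeSortTopDown {
    // Recursively sort a[low..high] (inclusive): split at the midpoint,
    // sort each half, then merge the two sorted halves.
    public static void sort(int[] a, int low, int high) {
        if (low >= high) return; // zero or one element: already sorted
        int mid = (low + high) / 2;
        sort(a, low, mid);
        sort(a, mid + 1, high);
        merge(a, low, mid, high);
    }

    // Two-way merge of a[low..mid] and a[mid+1..high] via a temporary array
    private static void merge(int[] a, int low, int mid, int high) {
        int[] tmp = new int[high - low + 1];
        int i = low, j = mid + 1, k = 0;
        // <= keeps equal elements in original order, so the sort is stable
        while (i <= mid && j <= high) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= high) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, low, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = { 4, 3, 1, 2, 5 };
        sort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4, 5]
    }
}
```

Both forms do the same work; the bottom-up version simply replaces the recursion with an explicit doubling `gap` loop.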

Analysis of Algorithms

Merge Sort Algorithm Performance

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java1-1564366184.png" alt="Merge sort algorithm performance">

time complexity

The recursion of merge sort forms a binary tree, and the number of merge passes equals the depth of that tree. Each pass scans all n elements in O(n) time, and for a complete binary tree there are about log₂n passes, so the time complexity is O(n*log₂n).

space complexity

As can be seen from the algorithm description above, a temporary array of size n is required to hold the merged sequence, so the space complexity is O(n).

Algorithm Stability

In merge sort, the order of equal elements does not change, so it is a stable algorithm.

Comparison of Merge Sort, Heap Sort, Quick Sort

<li>
Considering the space complexity: heap sort is preferred, followed by quick sort, and finally merge sort.
</li>
<li>
From the perspective of stability, merge sort should be selected, because heap sort and quick sort are both unstable.
</li>
<li>
If you consider the sorting speed in the average case, you should choose quick sort.
</li>

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, and arrays with and without duplicate elements. The samples are randomly generated, so the tests are meaningful.

Radix Sort

gist

Radix sort differs from the seven sorting methods explained earlier in this series in that it does not need to compare the size of keywords.

It is based on the value of each digit of the key: the n elements to be sorted are put through several rounds of "distribution" and "collection" to achieve ordering.

Let's take a concrete example to show how radix sort works.

Suppose the initial sequence is: R = {50, 123, 543, 187, 49, 30, 0, 2, 11, 100}.

We know that for any Arabic numeral, each of its digits takes a value from 0 to 9.

So we can regard 0~9 as 10 buckets.

We first classify the elements by their ones digit and place each into the corresponding bucket. For example, R[0] = 50 has ones digit 0, so it is stored in bucket 0.

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java6-1564366184.png" alt="Radix sort: distributing elements into buckets by ones digit">

After classification, we take these numbers out of each bucket in the order from number 0 to number 9.

At this time, the obtained sequence is a sequence with an increasing trend in single digits.

Sorted by ones digit: {50, 30, 0, 100, 11, 2, 123, 543, 187, 49}.

Next, the tens and hundreds digits are processed in the same way; after the final pass, the fully sorted sequence is obtained.
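The pass-per-digit process above can be sketched as a least-significant-digit (LSD) radix sort. This is a minimal sketch for non-negative integers, with my own class name; it uses a counting array of size 10 per pass as the "buckets", and fills the output from right to left so that equal digits keep their order (which is what makes radix sort stable).

```java
import java.util.Arrays;

public class RadixSortDemo {
    // LSD radix sort for non-negative integers: one stable
    // counting-sort pass per digit, from ones digit upward.
    public static void radixSort(int[] a) {
        int max = 0;
        for (int v : a) max = Math.max(max, v);
        // exp selects the current digit: 1 -> ones, 10 -> tens, ...
        for (int exp = 1; max / exp > 0; exp *= 10) {
            int[] count = new int[10]; // bucket sizes for digits 0..9
            for (int v : a) count[(v / exp) % 10]++;
            // Prefix sums turn sizes into each bucket's end position
            for (int d = 1; d < 10; d++) count[d] += count[d - 1];
            int[] out = new int[a.length];
            // Walk backwards so equal digits keep their relative order
            for (int i = a.length - 1; i >= 0; i--) {
                out[--count[(a[i] / exp) % 10]] = a[i];
            }
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }

    public static void main(String[] args) {
        int[] r = { 50, 123, 543, 187, 49, 30, 0, 2, 11, 100 };
        radixSort(r);
        System.out.println(Arrays.toString(r));
    }
}
```

Running this on the article's sample sequence performs three passes (ones, tens, hundreds) and yields the fully sorted array.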

Analysis of Algorithms

Radix sort performance

<img class="rich_pages" src="https://www.javazhiyin.com/wp-content/uploads/2019/07/java3-1564366185.png" alt="Radix sort performance">

time complexity

Let r be the radix (here 10) and d the number of digits of the largest key. Each pass distributes and collects all n elements across r buckets, and there are d passes, so the time complexity of radix sort is O(d(n+r)).

We can see that the efficiency of radix sorting has nothing to do with whether the initial sequence is ordered.

space complexity

During radix sort, temporary space of size n+r is required for the "bucketing" performed on each digit.

Algorithm Stability

In the radix sorting process, the elements with the same value in the current number of digits are uniformly "bucketed" each time, and there is no need to exchange positions. So radix sort is a stable algorithm.

sample code

https://github.com/dunwu/algorithm-tutorial/blob/master/codes/algorithm/src/test/java/io/github/dunwu/algorithm/sort/SortStrategyTest.java

The test samples cover arrays of odd and even length, and arrays with and without duplicate elements. The samples are randomly generated, so the tests are meaningful.
