I'm hhg, a graduate student from Nanjing, China.
- School: Hohai University
- Learning: I'm currently learning design patterns, LeetCode, distributed systems, middleware, and so on.
- How to reach me: VX
- My blog: https://hhgyyds.blog.csdn.net/
- Professional skills: my dream
As we know, there are eight major sorting algorithms: bubble sort, insertion sort, selection sort, merge sort, quick sort, heap sort, shell sort, and radix sort. This article is mainly a summary and refinement. Beginners are advised to look up each algorithm individually first; the steps are not explained in full detail here.
1-1: Bubble Sort - Based on Comparison and Swap
Overview:
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. The algorithm, which is a comparison sort, is named for the way smaller or larger elements “bubble” to the top of the list.
In other words, bubble sort splits the sequence into a sorted part and an unsorted part. It is a simple comparison-based sort: adjacent elements are compared and swapped whenever they violate the desired order (ascending or descending) until the whole array has been traversed. Each pass puts one smallest (or largest) value into place, slowly "bubbling" it up, hence the name.
Example:
Demo:
public static void sort(int[] nums) {
    int n = nums.length;
    for (int i = 0; i < n; i++) {
        for (int j = n - 1; j > i; j--) {
            if (nums[j] < nums[j - 1]) {
                swap(nums, j, j - 1);
            }
        }
    }
}

public static void swap(int[] nums, int a, int b) {
    int temp = nums[a];
    nums[a] = nums[b];
    nums[b] = temp;
}
Complexity Analysis:
- Time Complexity: O(n²)
- Space Complexity: O(1)
Optimization:
If the array becomes sorted after some (or even zero) adjustments, there is no need to keep going: a full pass with no swaps means the sort can stop early.
Demo:
public static void sort(int[] nums) {
    int n = nums.length;
    for (int i = 0; i < n; i++) {
        boolean flag = true;
        for (int j = n - 1; j > i; j--) {
            if (nums[j] < nums[j - 1]) {
                swap(nums, j, j - 1);
                flag = false;
            }
        }
        if (flag) {
            return;
        }
    }
}
Complexity Analysis:
- Time Complexity: O(n²) in the worst case (reversed input: every value must bubble all the way to the front); O(n) in the best case (already-sorted input: one pass with no swaps is enough).
- Space Complexity: O(1)
1-2: Selection Sort
Overview:
The algorithm divides the input list into two parts: a sorted sublist of items which is built up from left to right at the front (left) of the list and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.
In short: the algorithm splits the list into an unsorted part and a sorted part; each round selects the smallest (or largest) element of the unsorted part, swaps it to the front of the unsorted part, and extends the sorted part by one element.
Example:
Demo:
public static void sort(int[] nums) {
    for (int i = 0; i < nums.length; i++) {
        int minIndex = i;
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[j] < nums[minIndex]) {
                minIndex = j;
            }
        }
        swap(nums, minIndex, i);
    }
}

public static void swap(int[] nums, int a, int b) {
    int temp = nums[a];
    nums[a] = nums[b];
    nums[b] = temp;
}
Complexity Analysis:
- Time Complexity: O(n²)
- Space Complexity: O(1)
Optimization:
If the first element of the unsorted part is already the smallest, there is no need to swap it with itself.
Demo:
public static void sort(int[] nums) {
    for (int i = 0; i < nums.length; i++) {
        int minIndex = i;
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[j] < nums[minIndex]) {
                minIndex = j;
            }
        }
        if (minIndex != i) {
            swap(nums, minIndex, i);
        }
    }
}

public static void swap(int[] nums, int a, int b) {
    int temp = nums[a];
    nums[a] = nums[b];
    nums[b] = temp;
}
Complexity Analysis:
- Time Complexity: O(n²)
- Space Complexity: O(1)
A quick search turns up further optimizations, but I did not dig deeper here.
Notes:
About the code: it records the position of the minimum, not its value. Why not record the value directly? Because with only the value you cannot locate which element of the array needs to be swapped.
1-3: Insertion Sort
Overview:
Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.
Example:
Demo:
public static void sort(int[] nums) {
    int n = nums.length;
    if (n == 1) {
        return;
    }
    for (int i = 1; i < n; i++) {
        int temp = nums[i];
        // i is the head of the unsorted part; [0, i-1] is already sorted
        int j = i - 1;
        // while (j >= 0) {
        //     if (temp < nums[j]) {
        //         nums[j + 1] = nums[j];
        //         j--;
        //     } else {
        //         break;
        //     }
        // }
        // The loop above is equivalent to the one below
        while (j >= 0 && temp < nums[j]) {
            nums[j + 1] = nums[j];
            j--;
        }
        // j is decremented once more after the position is found,
        // so the final target position is j + 1
        nums[j + 1] = temp;
    }
}
Complexity Analysis:
- Time Complexity: O(n²) in the worst case (reversed input: every value is inserted at the very front); O(n) in the best case (sorted input: each value just stays at the end of the sorted part).
- Space Complexity: O(1)
Optimization:
To find the location the element belongs to within the sorted part, binary search is a natural choice.
Demo:
public static int findPos(int[] nums, int low, int high, int number) {
    while (low <= high) {
        // note the parentheses: >> binds looser than +
        int mid = low + ((high - low) >> 1);
        if (nums[mid] > number) {
            high = mid - 1;
        } else {
            low = mid + 1;
        }
    }
    return low;
}
Swap the linear search step for this function. After the position is found, the elements still have to be shifted before inserting; the shifting is unavoidable. The only difference is that the original version shifts while searching, whereas this one searches first and then shifts in one go.
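Putting the binary search together with the shifting loop gives a complete binary insertion sort. The sketch below is one way to combine them (the class name and layout are my own, not from the original):

```java
import java.util.Arrays;

public class BinaryInsertionSort {
    public static void sort(int[] nums) {
        for (int i = 1; i < nums.length; i++) {
            int temp = nums[i];
            // binary-search the insertion position within the sorted prefix [0, i-1]
            int low = 0, high = i - 1;
            while (low <= high) {
                int mid = low + ((high - low) >> 1);
                if (nums[mid] > temp) {
                    high = mid - 1;
                } else {
                    low = mid + 1;
                }
            }
            // shift [low, i-1] one step to the right, then insert
            for (int j = i - 1; j >= low; j--) {
                nums[j + 1] = nums[j];
            }
            nums[low] = temp;
        }
    }

    public static void main(String[] args) {
        int[] nums = {10, 6, 3, 8, 33, 27};
        sort(nums);
        System.out.println(Arrays.toString(nums)); // prints [3, 6, 8, 10, 27, 33]
    }
}
```

Note the search stays O(log n) per element, but the shifting keeps the overall worst case at O(n²).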
1-4: Quick Sort - Based on a Partitioning Routine (Divide and Conquer)
Overview:
Quicksort is a type of divide and conquer algorithm for sorting an array, based on a partitioning routine; the details of this partitioning can vary somewhat, so that quicksort is really a family of closely related algorithms. Applied to a range of at least two elements, partitioning produces a division into two consecutive non-empty sub-ranges, in such a way that no element of the first sub-range is greater than any element of the second sub-range. After applying this partition, quicksort then recursively sorts the sub-ranges, possibly after excluding from them an element at the point of division that is at this point known to be already in its final location. Due to its recursive nature, quicksort (like the partition routine) has to be formulated so as to be callable for a range within a larger array, even if the ultimate goal is to sort a complete array. In short there are two parts, partition + recursion: partitioning ensures every element in the left sub-range is no greater than every element in the right one, and the pivot itself lands on its final position.
Example
Demo:
package com.company.xx.eightSortAlgorithm;

public class QuickSort {
    public int partition(int[] nums, int low, int high) {
        int pivot = nums[low];
        while (low < high) {
            // scan from the right for the first value smaller than the pivot
            while (low < high && nums[high] >= pivot) {
                high--;
            }
            // nums[low] is already saved in pivot, so overwrite it directly
            nums[low] = nums[high];
            // scan from the left for the first value bigger than the pivot
            while (low < high && nums[low] <= pivot) {
                low++;
            }
            nums[high] = nums[low]; // same reasoning as above
        }
        nums[low] = pivot; // here low == high: the pivot's final position
        return low;
    }

    public void quickSort(int[] nums, int low, int high) {
        // when low >= high the range holds at most one element and is already sorted
        if (low < high) {
            int p = partition(nums, low, high);
            quickSort(nums, low, p - 1);
            quickSort(nums, p + 1, high);
        }
    }
}
Complexity Analysis:
- Time Complexity: O(n²) worst; O(n log n) best; O(n log n) average. The partition function is O(n), which is easy to see. Recursion cost can be analyzed with a recursion tree: the recursion depth is the tree's depth. The best case is a balanced split at every level, a roughly complete binary tree of depth log n; the worst case is a one-sided tree of depth n. The O(n log n) average can be derived mathematically; look up the derivation if interested.
- Space Complexity: O(n) worst; O(log n) best; O(log n) average.
Note:
There are two partition schemes. One scans from both ends of the range at the same time; the other scans from the low end only. The latter is the scheme that carries over to quicksort on a linked list.
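For the array case, a common form of the "scan from the low end only" idea is the Lomuto-style scheme sketched below (the naming and class layout are my own, not from the original); it is structurally the same as the linked-list partition shown later:

```java
import java.util.Arrays;

public class LomutoQuickSort {
    // Lomuto-style partition: a single left-to-right scan that maintains
    // the invariant nums[low+1 .. p] < pivot.
    public static int partition(int[] nums, int low, int high) {
        int pivot = nums[low];
        int p = low; // last index known to hold a value smaller than the pivot
        for (int i = low + 1; i <= high; i++) {
            if (nums[i] < pivot) {
                p++;
                int t = nums[i]; nums[i] = nums[p]; nums[p] = t;
            }
        }
        // drop the pivot onto its final position
        int t = nums[low]; nums[low] = nums[p]; nums[p] = t;
        return p;
    }

    public static void quickSort(int[] nums, int low, int high) {
        if (low < high) {
            int p = partition(nums, low, high);
            quickSort(nums, low, p - 1);
            quickSort(nums, p + 1, high);
        }
    }

    public static void main(String[] args) {
        int[] nums = {10, 6, 3, 8, 33, 27, 66, 9, 7, 88};
        quickSort(nums, 0, nums.length - 1);
        System.out.println(Arrays.toString(nums)); // prints [3, 6, 7, 8, 9, 10, 27, 33, 66, 88]
    }
}
```

Because the scan only ever moves forward, this version needs no backward pointer, which is exactly why it adapts to singly linked lists.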
Quicksort based on a linked list: if you have only ever written the array version, being asked for the linked-list version in an interview can catch you off guard.
In this part, the head node is always taken as the pivot.
- Based on swapping values.
public void sortLinkedList(ListNode head) {
    quickSort(head, null);
}

public void quickSort(ListNode head, ListNode tail) {
    if (head == tail) return;
    ListNode p = partition(head, tail);
    quickSort(head, p);
    quickSort(p.next, tail);
}

private ListNode partition(ListNode head, ListNode tail) {
    ListNode pivot = head;
    ListNode cur = head.next;
    ListNode p = pivot;
    while (cur != tail) {
        if (cur.val < pivot.val) {
            p = p.next;
            swap(cur, p);
        }
        cur = cur.next;
    }
    swap(p, pivot);
    return p;
}

public void swap(ListNode a, ListNode b) {
    int temp = a.val;
    a.val = b.val;
    b.val = temp;
}
p always points at the last node whose value is smaller than the pivot's, so the final swap leaves every value on the pivot's left smaller than it and every value on its right greater than or equal to it.
- Based on two auxiliary lists (small and large).
Algorithm:
Two auxiliary lists are used. small collects nodes whose value is smaller than the pivot's; large collects the rest. Each has its own dummy head node. After one pass, the small list is small -> pivot -> null and the large list is large -> null. Then quicksort recurses on small.next and large.next.
Example:
This is the state after one pass. Note that the pivot is linked onto the small list, and it does not move during the subsequent recursion; draw the lists out and you will see why the recursion is safe.
Code:
public ListNode quickSort(ListNode head) {
    if (head == null || head.next == null) {
        return head;
    }
    ListNode p_head = head.next;
    ListNode small = new ListNode(0);
    ListNode hSmall = small;
    ListNode large = new ListNode(0);
    ListNode hLarge = large;
    while (p_head != null) {
        if (p_head.val < head.val) {
            small.next = p_head;
            small = small.next;
        } else {
            large.next = p_head;
            large = large.next;
        }
        p_head = p_head.next;
    }
    large.next = null;
    small.next = head;
    head.next = null;
    small = quickSort(hSmall.next);
    large = quickSort(hLarge.next);
    head.next = large;
    return small;
}
- Quick sort in place (the nicest version I have seen).
Algorithm:
In each recursion, nodes smaller than the head (the pivot) are prepended before the lHead pointer, and the rest are appended after the rTail pointer.
Example:
Code:
public ListNode quickSort2(ListNode head, ListNode tail) {
    if (head == tail || head.next == tail) {
        return head;
    }
    ListNode p = head.next;
    ListNode lHead = head;
    ListNode rTail = head;
    while (p != tail) {
        ListNode next = p.next;
        if (p.val < head.val) {
            p.next = lHead;
            lHead = p;
        } else {
            rTail.next = p;
            rTail = p;
        }
        p = next;
    }
    // cut off the leftover nodes after rTail
    rTail.next = tail;
    ListNode res = quickSort2(lHead, head);
    head.next = quickSort2(head.next, tail);
    return res;
}
1-5: Merge Sort (Divide and Conquer)
Overview:
Conceptually, a merge sort works as follows:
- Divide the unsorted list into n sublists, each containing one element (a one-element list is considered sorted).
- Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Example
merging the two already-sorted subsequences [4,5,7,8] and [1,2,3,6]
Demo:
As a prerequisite, first learn how to merge two sorted arrays a[] and b[] into a third array:
public int[] merge(int[] a, int[] b) {
    int[] temp = new int[a.length + b.length];
    int index_temp = 0;
    int indexA = 0;
    int indexB = 0;
    while (indexA < a.length && indexB < b.length) {
        if (a[indexA] < b[indexB]) {
            temp[index_temp++] = a[indexA++];
        } else {
            temp[index_temp++] = b[indexB++];
        }
    }
    while (indexA < a.length) {
        temp[index_temp++] = a[indexA++];
    }
    while (indexB < b.length) {
        temp[index_temp++] = b[indexB++];
    }
    return temp;
}
With that in hand, here is the merge sort itself:
public void merge2(int[] nums, int low, int mid, int high) {
    // the two parts to merge are [low, mid] and [mid+1, high]
    int[] temp = new int[nums.length];
    int indexA = low, indexB = mid + 1, indexTemp = low;
    while (indexA <= mid && indexB <= high) {
        if (nums[indexA] > nums[indexB]) {
            temp[indexTemp++] = nums[indexB++];
        } else {
            temp[indexTemp++] = nums[indexA++];
        }
    }
    while (indexA <= mid) {
        temp[indexTemp++] = nums[indexA++];
    }
    while (indexB <= high) {
        temp[indexTemp++] = nums[indexB++];
    }
    // copy the merged range back into nums; merging in place on nums
    // directly would overwrite values that are still needed
    for (int i = low; i < indexTemp; i++) {
        nums[i] = temp[i];
    }
}

public void mergeSort(int[] nums, int low, int high) {
    // when low >= high the range holds at most one element, so stop
    if (low < high) {
        int mid = low + ((high - low) >> 1);
        mergeSort(nums, low, mid);
        mergeSort(nums, mid + 1, high);
        merge2(nums, low, mid, high);
    }
}
Notes:
Worth noting: quicksort does its work (partitioning) before recursing into the sub-ranges; merge sort is the reverse, it recurses first and does its work (merging) on the way back up, starting from the smallest sub-ranges.
Complexity Analysis:
- Time Complexity: O(n log n) worst, best, and average.
- Space Complexity: O(n + log n) = O(n) worst, best, and average (the temp array plus the recursion stack).
Merge sort based on a linked list:
ListNode findMid(ListNode head) {
    ListNode fast = head.next, slow = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }
    return slow;
}

public ListNode merge(ListNode l1, ListNode l2) {
    ListNode head = new ListNode(-1);
    ListNode temp = head;
    while (l1 != null && l2 != null) {
        if (l1.val > l2.val) {
            temp.next = l2;
            l2 = l2.next;
        } else {
            temp.next = l1;
            l1 = l1.next;
        }
        temp = temp.next;
    }
    if (l1 != null) {
        temp.next = l1;
    }
    if (l2 != null) {
        temp.next = l2;
    }
    return head.next;
}

public ListNode sortList(ListNode head) {
    if (head == null) {
        return null;
    }
    if (head.next == null) {
        return head;
    }
    // head = [head -> ... mid -> null], head2 = [mid.next -> ... -> null]
    ListNode mid = findMid(head);
    ListNode head2 = mid.next;
    // disconnect head and head2
    mid.next = null;
    ListNode l1 = sortList(head);
    ListNode l2 = sortList(head2);
    return merge(l1, l2);
}
1-6: Heap Sort - A Comparison-based Sorting Algorithm.
Overview:
In computer science, heapsort is a comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like selection sort, heapsort divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element from it and inserting it into the sorted region. Unlike selection sort, heapsort does not waste time with a linear-time scan of the unsorted region; rather, heap sort maintains the unsorted region in a heap data structure to more quickly find the largest element in each step. In short: heap sort and selection sort share the idea of repeatedly moving the extreme element of the unsorted part into the sorted part; selection sort finds that element with a linear O(n) scan, while heap sort finds it in O(log n) by keeping the unsorted part organized as a heap (a complete binary tree).
Though the heap is a common data structure, I am a bit rusty on it, so this section records it in detail, starting from what a heap is.
Definition of max-heap:
In computer science, a heap is a specialized tree-based data structure which is essentially an almost complete tree that satisfies the heap property: in a max heap, for any given node C, if P is a parent node of C, then the key (the value) of P is greater than or equal to the key of C. In a min heap, the key of P is less than or equal to the key of C. The node at the "top" of the heap (with no parents) is called the root node. In short: a heap is a complete binary tree in which (for a max-heap) every parent's key is at least as large as its children's.
Common operations:
To implement the heap data structure in an array, I use length as the last valid index of the array (i.e. nums[0..length] is valid).
- Adjust from top to bottom (sift down). Core idea: pick the larger of the two children and swap with it, then keep adjusting downwards from that child, until both children are smaller or a leaf is reached.
Code 1
int length; // last valid index: nums[0, length] is valid; note this is an index, not nums.length
int[] nums;

public void adjustDown(int i, int length) {
    int father = nums[i]; // hold the value being adjusted
    for (int leftChild = 2 * i + 1; leftChild <= length; leftChild = 2 * i + 1) {
        int rightChild = leftChild + 1;
        int swappedChild = leftChild;
        // choose the bigger of the left and right children
        if (rightChild <= length && nums[rightChild] > nums[leftChild]) {
            swappedChild = rightChild;
        }
        // no actual swap is performed here, but the effect is the same: the
        // child's value moves up and the "hole" sinks down, saving operations
        // at a slight cost in readability. Note that father always holds the
        // originally adjusted value; comparing against nums[i] would be wrong
        // after the first move. Trace or debug it once and it becomes clear.
        if (nums[swappedChild] > father) {
            nums[i] = nums[swappedChild]; // move the child's value up
            i = swappedChild;             // continue adjusting from the child
        } else {
            break;
        }
    }
    nums[i] = father; // drop the held value into its final position
}
Code 2
public void adjustDown2(int i, int length) {
    for (int leftChild = 2 * i + 1; leftChild <= length; leftChild = 2 * i + 1) {
        int rightChild = leftChild + 1;
        int swappedChild = leftChild;
        // choose the bigger of the left and right children
        // (length is an inclusive index, so rightChild == length is still valid)
        if (rightChild <= length && nums[rightChild] > nums[leftChild]) {
            swappedChild = rightChild;
        }
        if (nums[swappedChild] > nums[i]) {
            swap(nums, swappedChild, i); // swap child and parent
            i = swappedChild;            // continue adjusting from the child
        } else {
            break;
        }
    }
}
These are two implementations. The first is better than the second because it uses assignment instead of full swaps; see the code comments for details.
Notes:
- The length parameter matters because during sorting the range of indices being adjusted keeps shrinking; keep that firmly in mind. It also supports the insert and delete operations later.
- Adjust from bottom to top (sift up).
public void adjustUp(int i) {
    int temp = nums[i];
    // i is the node being adjusted. Compute the parent index first; require
    // father >= 0 and i != 0: without the i != 0 check the loop never ends,
    // since in Java (0 - 1) / 2 == 0. The idea mirrors adjustDown.
    for (int father = (i - 1) / 2; father >= 0 && i != 0; father = (i - 1) / 2) {
        if (nums[father] < temp) { // in a max-heap a smaller parent moves down
            nums[i] = nums[father];
            i = father;
        } else {
            break;
        }
    }
    nums[i] = temp;
}
- build: adjust downwards from the last non-leaf node up to the root:
public void build() {
    // create the heap: (length - 1) / 2 is the parent index of the last element
    for (int i = (length - 1) / 2; i >= 0; i--) {
        adjustDown(i, length);
    }
}
- insertValue: insertion happens after the build; put the new value at the end, then adjust upwards.
public void insertValue(int value) {
    length++;
    nums[length] = value;
    adjustUp(length);
}
- delete 删除后从堆底取一个值替代,之后向下进行调整
public void delete(int index) {
if (index > length) {
return;
}
if (index == length) {
length--;
} else {
nums[index] = nums[length];
length--;
adjustDown(index, length);
}
}
- sort: each round, swap the top of the heap with the bottom element, shrink the heap by one, then adjust from the top down.
public void sort() {
    // assumes build() has been called; swap the top and the bottom to sort
    for (int i = length; i >= 0; i--) {
        swap(nums, 0, i);
        adjustDown(0, i - 1);
    }
}
Note that, as mentioned above, i keeps decreasing by one here, i.e. the adjustment range keeps shrinking.
Complexity Analysis:
- Time Complexity: O(n log n) worst, best, and average.
- Space Complexity: O(1) worst, best, and average.
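As a quick check of the build + sort steps, here is a condensed, self-contained sketch that folds the pieces above into static methods (the class name and the `last` parameter are my own; `last` plays the role of the inclusive length index):

```java
import java.util.Arrays;

public class HeapSortDemo {
    public static void sort(int[] nums) {
        int last = nums.length - 1;
        // build: adjust every non-leaf node downwards, starting from the
        // parent of the last element
        for (int i = (last - 1) / 2; i >= 0; i--) {
            adjustDown(nums, i, last);
        }
        // sort: move the max to the end, shrink the heap, re-adjust the top
        for (int i = last; i > 0; i--) {
            int t = nums[0]; nums[0] = nums[i]; nums[i] = t;
            adjustDown(nums, 0, i - 1);
        }
    }

    private static void adjustDown(int[] nums, int i, int last) {
        int value = nums[i]; // hold the value being adjusted
        for (int child = 2 * i + 1; child <= last; child = 2 * i + 1) {
            if (child + 1 <= last && nums[child + 1] > nums[child]) {
                child++; // pick the bigger child
            }
            if (nums[child] > value) {
                nums[i] = nums[child]; // move the child's value up
                i = child;
            } else {
                break;
            }
        }
        nums[i] = value; // final position of the held value
    }

    public static void main(String[] args) {
        int[] nums = {10, 6, 3, 8, 33, 27};
        sort(nums);
        System.out.println(Arrays.toString(nums)); // prints [3, 6, 8, 10, 27, 33]
    }
}
```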
1-7: Shell Sort
Overview:
Shellsort is an optimization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, taking every hth element produces a sorted list. Such a list is said to be h-sorted. It can also be thought of as h interleaved lists, each individually sorted. Beginning with large values of h allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do. If the list is then k-sorted for some smaller integer k, then the list remains h-sorted. Following this idea for a decreasing sequence of h values ending in 1 is guaranteed to leave a sorted list in the end. In short: Shellsort is insertion sort with shrinking gaps, so far-apart elements can be exchanged early, leaving little work for the final gap-1 pass.
Example:
Code:
import java.util.Arrays;

public static void main(String[] args) {
    int[] arr = new int[]{10, 6, 3, 8, 33, 27, 66, 9, 7, 88};
    shellSort(arr);
    System.out.println(Arrays.toString(arr));
}

private static void shellSort(int[] arr) {
    // the gap controls the passes; the last pass uses gap 1,
    // and the initial gap can be chosen freely
    for (int i = arr.length / 2; i > 0; i /= 2) {
        // starting j at i works because the first element of each group is
        // trivially sorted on its own, so there is no need to start from 0.
        // j++ rather than j += i: all groups are insertion-sorted
        // simultaneously, not one group after another
        for (int j = i; j < arr.length; j++) {
            // k indexes the last sorted element of the group;
            // arr[j] is the value being inserted
            int k = j - i;
            int temp = arr[j];
            while (k >= 0 && temp < arr[k]) {
                // shift instead of swap
                arr[k + i] = arr[k];
                k -= i;
            }
            arr[k + i] = temp;
        }
    }
}
Complexity Analysis:
- Time Complexity: O(n²) worst (fully reversed input); O(n) best (already-sorted input); about O(n^1.3) on average (per the references; I have not derived this myself).
- Space Complexity: O(1) worst, best, and average.
1-8: Radix Sort
Overview:
In computer science, radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For elements with more than one significant digit, this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, radix sort has also been called bucket sort and digital sort. In plain words: use ten buckets for the digits 0-9. The number with the most digits determines how many rounds are needed. Each round, on the ones, then tens, then hundreds digit and so on, does a distribute step (walk the array in order, dropping each number into the bucket for its current digit; equal digits share a bucket) and a collect step (walk the buckets in order, writing the numbers back into the array in insertion order).
Example:
Code:
import java.util.Arrays;

public class RadixSort {
    public void sort(int[] nums) {
        int max = Arrays.stream(nums).max().orElse(0);
        int n = 1;
        int[][] bucket = new int[10][nums.length];
        while (n <= max) { // one round per digit of the largest number
            int[] counts = new int[10]; // how many numbers each bucket holds; also used for traversal
            // distribute into the buckets
            for (int num : nums) {
                int index = (num / n) % 10; // which bucket
                bucket[index][counts[index]] = num;
                counts[index]++;
            }
            int k = 0;
            // collect the buckets back into the array:
            // i walks the buckets, j walks the numbers inside a bucket
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < counts[i]; j++) {
                    nums[k++] = bucket[i][j];
                }
            }
            System.out.println(Arrays.toString(nums)); // show the array after each round
            n *= 10;
        }
    }

    public static void main(String[] args) {
        int[] nums = {10, 6, 3, 8, 33, 27, 66, 9, 7, 88};
        new RadixSort().sort(nums);
    }
}
Complexity Analysis:
- Time Complexity: O(d·2n) = O(d·n), where d is the number of digits of the largest number and n is the length of the array (each round distributes and then collects, hence 2n per round).
- Space Complexity: O(n + r), where r = 10 is the number of buckets (the simple 2-D bucket array above actually allocates O(r·n)).
1-9: Conclusion
Stability deserves a separate mention: a sort is stable if it preserves the relative order of equal elements. Of the eight algorithms here, bubble, insertion, merge, and radix sort are stable; selection (as written here), quick, heap, and shell sort are not.
Comparing heap sort, quicksort, and merge sort (all on arrays):
By time and space complexity, all three reach O(n log n) in the average case. But that alone settles little: in one situation quicksort wins, and in another heap sort performs better. Declaring one "better" without stating the situation is not meaningful.
Why quicksort is faster than heap sort in most cases (ignoring space complexity): there are two commonly cited reasons.
- Locality of reference
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality, temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termed data locality) refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, such as traversing the elements in a one-dimensional array.
Locality is a type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as caching, prefetching for memory, and advanced branch predictors at the pipelining stage of a processor core.
On an unsorted array, quicksort usually beats heap sort on this count: within quicksort the processor only accesses the current [low, high] range, so spatial locality is good. Heap sort, by contrast, adjusts from the top of the heap toward the bottom, so a single adjustment jumps across widely separated indices; the span of access is large, spatial locality is poor, the cache helps much less, and the CPU slows down.
- Number of swaps?
While collecting information online, I found a second answer claiming heap sort performs more swaps than quicksort. But it was argued from specific examples, i.e. it depends on the input data, so I don't consider this reason reliable.
Why quicksort is faster than merge sort in most cases:
The main reason is that merge sort needs an external array to merge: each call to merge2 uses a temp array to merge nums[low, mid] and nums[mid+1, high]. This is where "quicksort is in-place, merge sort is out-of-place" comes from; allocating and reclaiming that extra memory is where the time goes.
Performance on a linked list:
On a linked list, however, the picture changes. As the code above shows, merge sort then needs no external space and stays at a stable O(n log n), the same as on an array, because findMid splits the list evenly. Quicksort, on the other hand, cannot guarantee O(n log n), since its partition does not divide the list into two halves of equal length. So on linked lists, merge sort tends to perform better.
Reference
- Merge sort on a linked list
- Why is quicksort used everywhere instead of merge sort?
- Sort a linked list (quicksort approach)
- The eight classic sorting algorithms explained in detail (animations, analysis, Java code, complexity, use cases)
- Comparison and selection criteria for the eight classic sorting algorithms
- Middle node of a linked list
- A quicksort implementation worth knowing for interviews
- Merge sort, illustrated



