Abstract: Complexity analysis boils down to two things: time complexity and space complexity.
This article is shared from the HUAWEI CLOUD community article "Complexity in Plain Language", by Long Ge's Notes.
Complexity Analysis
As I said, in my opinion complexity analysis is the most important topic in data structures and algorithms. Learning it only gets you through the door, but if you never learn it, you will never even find the door.
Why is complexity analysis so important?
This starts with the Big Bang... er, with data structures and algorithms themselves.
When I daydream, I picture myself as a slacker who gets paid for doing nothing. Data structures and algorithms don't share such a lofty dream, but their goal is similar in spirit: get the job done in less time and with less storage.
Where would you even start measuring that? CPU and memory consumption, communication bandwidth, instruction counts... there are far too many factors to track one by one. Instead, we can build a theoretical model that gives every algorithm a comparable standard, expressed as a function of the input size n.
Find out why and how
So how do we measure "less time and less storage"? That is exactly why complexity analysis was born.
Let me make a rough analogy:
If data structures and algorithms are regarded as martial arts moves, then complexity analysis is the corresponding mental method.
If you only learn how to use data structures and algorithms but skip complexity analysis, it is like going to great pains to dig the legendary Wangba Quan manual out from under the floor tile of your neighbor Lao Wang's second bedroom, only to discover that the manual contains the moves but not the hidden inner formulas. Without the inner method you will never truly master Wangba Quan, let alone scare off the pug raised by Li Yang, the bachelor at the village entrance.
Tie Zi: Wow, amazing, you're right about all that, but is there really any need to learn it?
Xiaoxi:???
Tie Zi: These days plenty of websites and packages will just run your code and tell you exactly how long it took and how much memory it used. Isn't that enough to compare the efficiency of algorithms?
Xiaoxi: . . . .
Too young, too simple!
What you are describing is called post-hoc analysis (measuring after the fact).
Simply put: you write the algorithm code, prepare test data in advance, run it on a computer, and judge how good the algorithm is by its final running time, measured in ordinary wall-clock time.
Setting aside the possibility that the algorithm code you worked so hard on is itself badly written, post-hoc analysis has inherent flaws and is not a useful metric for us:
First, post-hoc measurement depends too heavily on the computer's hardware and software. The same code runs faster on a Core i7 than on a Core i5, not to mention differences in operating systems and programming languages; even on the same machine with everything else equal, the memory or CPU load at that moment can change the running time.
For example, consider sorting national census data with $n = 10^9$ records using bubble sort, which takes roughly $(10^9)^2 = 10^{18}$ operations.
For an ordinary computer (1 GHz, about $10^9$ operations per second), that is about $10^9$ seconds (roughly 30 years).
For the Tianhe-1 supercomputer (1 petaflop = $10^{15}$ operations per second), it is about $10^3$ seconds (under 20 minutes).
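Spelled out, using the same numbers as above:
<center>$\dfrac{10^{18}\ \text{ops}}{10^{9}\ \text{ops/s}} = 10^{9}\ \text{s} \approx 30\ \text{years},\qquad \dfrac{10^{18}\ \text{ops}}{10^{15}\ \text{ops/s}} = 10^{3}\ \text{s} \approx 17\ \text{minutes}$</center>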
Second, post-hoc measurement depends too heavily on the size and nature of the test data. Take sorting: with only 5 or 10 numbers, even the most rubbish sort finishes rocket fast. And with 100k or 1M numbers, sorting already-ordered data takes a very different amount of time from sorting shuffled data.
So how much test data is enough? And how ordered should the data be?
Hard to say, right?
Clearly we need a metric that can estimate an algorithm's efficiency and judge its pros and cons without relying on external factors such as machine performance or data scale. That is exactly what complexity analysis was born to do. To analyze algorithms yourself, you first need to understand:
time complexity
Algorithm running time
Different solutions to the same problem take different amounts of time. The shorter the running time, the more efficient the algorithm; the longer the running time, the less efficient it is.
So how to estimate algorithm complexity?
Stand back, everyone: the Big O we know so well enters the stage!
After pulling out their last strands of hair, the experts found that when using running time to describe how fast an algorithm is, what really matters is the total number of steps the algorithm executes.
Because this is only an estimate, assume every line of code takes the same amount of time to run, one Btime (one basic time unit). Then the algorithm's total running time = the total number of lines of code executed.
Let's look at a simple piece of code below.
code 1
# python
def longgege_sum(m):
    sum = 0
    for longgege in range(m):
        sum += longgege
    return sum
Under the assumption above, what is the total running time of this accumulation code?
The assignment sum = 0 runs once and takes 1 Btime; the loop header and the statement sum += longgege each run m times, taking m Btime apiece; so the total running time is (1 + 2m) Btime.
If we use a function S to represent the total running time, the above can be written as S(m) = (1 + 2m) * Btime. In plain language: for an input of size m, an algorithm whose total step count is (1 + 2m) has running time S(m).
The formula shows that S(m) is proportional to the total number of steps. This law matters because it reveals an easy-to-grasp trend: data size and running time grow together.
Maybe these data sizes still feel abstract, so here is a rough reference.
For a typical home computer today, to solve a problem within 1 second:
- an O($n^2$) algorithm can handle data on the order of $10^4$
- an O(n) algorithm can handle data on the order of $10^8$
- an O(nlogn) algorithm can handle data on the order of $10^7$
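As a rough sanity check (assuming, purely for illustration, a machine that does on the order of $10^8$ simple operations per second; that figure is an assumption, not a measurement):
<center>$(10^4)^2 = 10^8,\qquad 10^8 = 10^8,\qquad 10^7 \times \log_2 10^7 \approx 2.3\times 10^8$</center>
Each case works out to roughly $10^8$ operations, i.e. on the order of one second.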
Big O notation
What is Big O
Many readers can rattle off O(n) and O($n^2$) when talking about time complexity, yet cannot say what Big O actually is.
Introduction to Algorithms explains it like this: Big O denotes an upper bound, meaning the worst case, the longest the algorithm can run over any possible input.
Take insertion sort: we say its time complexity is O($n^2$). If the input happens to be already sorted, the running time is only O(n); but over all possible inputs the worst case is O($n^2$), so the time complexity of insertion sort is stated as O($n^2$).
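For reference, here is a standard insertion sort (my own sketch, not code from the article). On already-sorted input the inner while loop never runs, giving O(n); on reverse-sorted input it shifts i elements at step i, giving O($n^2$):
//java
public static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int cur = a[i];
        int j = i - 1;
        // shift larger elements one step to the right
        while (j >= 0 && a[j] > cur) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = cur;   // drop cur into its sorted position
    }
}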
The point is that the time complexity of one and the same algorithm is not fixed: it also depends on the form of the input data. Different inputs can give different complexities; keep that in mind. When an interviewer discusses the implementation and performance of an algorithm with us, they usually mean the time complexity in the theoretical (worst) case.
Tie Zi: I'm not well read, so don't try to fool me. This is still a bit abstract. And does it really apply to all algorithms? Not necessarily.
Xiaoxi: You're sharp. You know how to sum the first n terms of a sequence, right?
Tie Zi: Our junior high teacher taught us that; an arithmetic series is easy to sum.
Xiaoxi: Then just look at the following code.
Here's an example:
// compute the sum 1 + 2 + 3 + ... + n
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
You can see that the loop runs n times, so the time complexity is O(n); in other words, the time complexity tracks how many times the program executes its steps.
To turn a step-count function into Big O, we keep only the highest-order term; and if that term has a constant coefficient other than 1, we drop the coefficient as well. For example:
2$n^2$ + 3n + 1 -> $n^2$
So the time complexity is O($n^2$).
You may wonder why these terms can be dropped. As n grows, the $n^2$ curve pulls further and further away from the n and constant terms, so 2$n^2$ + 3n + 1 is ultimately estimated as O($n^2$): the 3n + 1 part becomes negligible as the number of operations increases.
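If you want to see this numerically, here is a throwaway sketch (mine, not from the article) that prints how small the 3n + 1 part becomes relative to the whole 2$n^2$ + 3n + 1 as n grows:
//java
public class DominantTerm {
    public static void main(String[] args) {
        for (int n = 10; n <= 1_000_000; n *= 10) {
            double total = 2.0 * n * n + 3.0 * n + 1;
            double lowOrder = 3.0 * n + 1;
            // share of the total contributed by the lower-order terms 3n + 1
            System.out.printf("n=%d  low-order share=%.4f%%%n", n, 100 * lowOrder / total);
        }
    }
}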
Tie Zi: Got it. Can you make it a little more complicated?
Coming right up.
// use the sums method to compute the sum of three parts
// java
public static int sums(int n) {
    int num1 = 0;
    for (int longege1 = 1; longege1 <= n; longege1++) {
        num1 = num1 + longege1;
    }

    int num2 = 0;
    for (int longege2 = 1; longege2 <= n; longege2++) {
        for (int i = 0; i <= n; i++) {
            num2 = num2 + longege2;
        }
    }

    int num3 = 0;
    for (int longege3 = 1; longege3 <= n; longege3++) {
        for (int i = 0; i <= n; i++) {
            for (int j = 0; j <= n; j++) {
                num3 = num3 + longege3;
            }
        }
    }
    return num1 + num2 + num3;
}
The code above sums three parts. After what we've covered it should be easy to see that the first part is O(n), the second part is O($n^2$), and the third part is O($n^3$).
Formally, this code gives S(n) = O(n) + O($n^2$) + O($n^3$). Keeping only the "dominant" part, the first two brothers are dropped outright, and finally S(n) = O($n^3$).
From these examples, sharp readers will notice that for time complexity analysis it is enough to find the part of the code that "dominates": the part with the highest order of growth, that is, the part executed the greatest number of times, determines the overall magnitude.
The rest is practice: practice deliberately and think it through, and you'll be as steady as I am.
All right
Now let me introduce a few big shots, er, a few common orders of complexity~
Constant order: O(1)
function test($n){
echo $n;
echo $n;
echo $n;
}
There is no loop: no matter what $n is, only 3 statements run, so the count is O(3), which we write as O(1).
Linear order: O(n)
for($i = 1; $i <= $n; $i++){
    $sum += $i;
}
The familiar square (and cubic) order: O($n^2$) / O($n^3$)
$sum = 0;
for($i = 1; $i <= $n; $i++){
    for($j = 1; $j <= $n; $j++){
        $sum += $j;
    }
}
Two nested loops: the inner loop executes n times and the outer loop executes n times, so the time complexity is O($n^2$); the cubic order works the same way with three nested loops.
Special square order: O($n^2$/2+n/2)->O($n^2$)
for(){
for(){
..... ----------->n^2
}
}
+
for(){
------------> n
}
+
echo $a+$b --------------> 1
So the overall step count is $n^2$ + n + 1, and the time complexity works out to O($n^2$).
Logarithmic order: O($\log_2 n$)
int longege = 1;
while(longege < m) {
    longege = longege * 2;
}
As before, first find the "dominant" part: here it is the statement inside the loop. Once we know how many times it executes, we know the time complexity of the whole snippet.
In plain language: how many times do we multiply by 2 before the value reaches m (i.e. longege >= m)?
Suppose the answer is y; then we are solving:
<center>$2^y=m$</center>
which is
<center>$y=\log_2 m$</center>
So the time complexity of the code above is O($\log_2 m$).
For logarithmic complexity, though, it doesn't matter whether the base is 2, 3, or 20: we can simply write O(logn).
This follows from the change-of-base formula for logarithms.
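Concretely, for any bases a and b:
<center>$\log_a n = \dfrac{\log_b n}{\log_b a} = \dfrac{1}{\log_b a}\cdot\log_b n$</center>
The factor $\frac{1}{\log_b a}$ is a constant and is dropped in Big O notation, so every logarithmic base collapses to the same O(logn).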
Besides the size of the dataset, "the specifics of the data" also affect the running time of the algorithm.
Let's look at this piece of code:
public static int find_word(String[] arr, String word) {
    int flag = -1;
    for(int i = 0; i < arr.length; i++) {
        if(arr[i].equals(word)) {
            flag = i;
            break;
        }
    }
    return flag;
}
This simple code finds the position at which the string word appears in the array arr; I'll use it to explain what "the specifics of the data" means.
The variable word may appear anywhere in arr. Suppose arr = ['a', 'b', 'c', 'd']:
- When word = 'a', it is the very first element, so nothing after it needs to be examined; the time complexity in this case is O(1).
- When word = 'd', or word = 'e' (not present at all), the entire array is traversed, so the time complexity in these cases is O(n).
According to different situations, we have three concepts of best case time complexity, worst case time complexity and average case time complexity.
Let's look at a piece of code, and we will analyze its time complexity from the best and worst cases.
// n is the length of the array
int find(int[] array, int n, int x) {
    int i = 0;
    int pos = -1;
    for (; i < n; i++) {
        if (array[i] == x) {
            pos = i;
        }
    }
    return pos;
}
This code finds the position of the variable x in an unordered array, returning -1 if it is not found. A rough analysis: the core loop body executes n times, so the time complexity is O(n), where n is the length of the array.
The code is simply optimized: you can end the loop early when you find an element that meets the conditions in the middle.
// n is the length of the array
int find(int[] array, int n, int x) {
    int i = 0;
    int pos = -1;
    for (; i < n; i++) {
        if (array[i] == x) {
            pos = i;
            break;
        }
    }
    return pos;
}
For the optimized code we can no longer simply say the time complexity is O(n), because the loop may now stop at any index from 0 to n - 1. Interesting, isn't it?
Continue to analyze below:
- Assuming that the first element in the array is exactly the variable x to be searched, it is obvious that the time complexity is O(1).
- Assuming that the variable x does not exist in the array, then we need to traverse the entire array, and the time complexity becomes O(n).
So, in different cases, the time complexity of this code is different.
Hold your horses~
hmm~
A couple more concepts.
I believe these names are self-explanatory. The best-case time complexity is the time complexity of the code in the most ideal situation; it corresponds to assumption 1 above, and is O(1).
Likewise, the worst-case time complexity is the time complexity of the code in the worst situation; it corresponds to assumption 2, and is O(n).
Average time complexity
This one is also called the "weighted average time complexity". Why weighted? Because to compute the average time complexity properly we have to take probabilities into account: each case needs a "weight" before we can compute a true average.
Let's analyze the average time complexity of the same example:
// n is the length of the array
int find(int[] array, int n, int x) {
    int i = 0;
    int pos = -1;
    for (; i < n; ++i) {
        if (array[i] == x) {
            pos = i;
            break;
        }
    }
    return pos;
}
The code is simple: it looks for the number x in an array; the best case is O(1) and the worst case is O(n).
How is the average complexity calculated?
Let's talk about the simple average calculation formula first:
For the code above, add up the number of comparisons needed for every possible outcome: 1 + 2 + 3 + ... + n + n (the extra n is for the case where x is not in the array and the whole array is traversed). There are n + 1 possible outcomes in total, so the simple average is:
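Written out (reconstructing the arithmetic the text describes):
<center>$\dfrac{1 + 2 + 3 + \cdots + n + n}{n+1} = \dfrac{\frac{n(n+1)}{2} + n}{n+1} = \dfrac{n(n+3)}{2(n+1)} \approx \dfrac{n}{2}$</center>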
Substituting into Big O notation (dropping coefficients and lower-order terms), the result is O(n).
What this formula says is: add up the cost of all possible outcomes and divide by the number of outcomes. It is a crude, unweighted average that implicitly treats all n + 1 outcomes as equally likely.
This is a crude assumption.
What if we bring a little probability into it?
There are 2 probabilities here:
The probability that x is in the array at all: there are 2 cases, in or not in, so take it as 1/2.
The probability that x sits at any particular position in the array: there are n positions and it occupies exactly one, so each position has probability 1/n.
Multiplying the two gives 1/(2n): this is the "weight", the probability that x is found at one specific position.
How can the "complexity" of the above code be calculated using the weighted values?
Instead of dividing by (n + 1), we weight each outcome: each "found at position i" outcome gets weight 1/(2n), and the "not found" outcome gets weight 1/2.
The result is (3n + 1) / 4. This is the "weighted average time complexity": the weighted average of the 1 + 2 + ... + n + n comparison counts.
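The weighted calculation, written out:
<center>$\displaystyle\sum_{i=1}^{n} i\cdot\frac{1}{2n} + n\cdot\frac{1}{2} = \frac{1}{2n}\cdot\frac{n(n+1)}{2} + \frac{n}{2} = \frac{n+1}{4} + \frac{n}{2} = \frac{3n+1}{4}$</center>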
In Big O notation, after removing coefficients, constants, and lower-order terms, the final result is still O(n).
So the former uses no weights at all in the denominator, just a plain n + 1, while the latter does a simple weighting and treats each "found" outcome as having probability 1/(2n) rather than 1/(n + 1).
The weighted version builds on the simple average and aims to be more accurate. To compute an accurate average time complexity you must estimate the weights accurately, and the weights depend on the range and distribution of the data, so in practice they have to be adjusted to the situation.
In short, the value of "the probability that x is in the array" is not necessarily 1/2. Given a concrete data set such as {y, s, f, f, g, x, g, h}, the relevant probability is 1/8 rather than 1/2, so it always has to be judged from the actual data.
Amortized time complexity
Ah, this sounds a lot like average time complexity. However, amortized time complexity applies in more special and more limited scenarios than average time complexity. Here's a piece of code:
// array is an array of length n
// array.length in the code equals n
static int[] array = new int[]{1, 2, 3, 4, 5};
static int count = 2;

public static void insert(int val) {
    // case: the array has no free space
    if (count == array.length) {
        int sum = 0;
        for (int i = 0; i < array.length; i++) {
            sum = sum + array[i];
        }
        array[0] = sum;
        count = 1;
        System.out.println("array.length:::" + array.length + "sum:" + sum);
    }
    // case: the array has free space
    array[count] = val;
    count++;
    System.out.println("count!=array.length:" + array.length + ",,,count::" + count);
    for (int i = 0; i < array.length; i++) {
        System.out.println("array[" + i + "] = " + array[i]);
    }
}
The code above inserts data into an array. When the array is full, i.e. when count == array.length, we loop over the array to sum its elements, "empty" the array by putting the sum in the first position, and then insert the new value. If the array still has free space, we simply insert the value directly.
Now analyze its time complexity. In the ideal case there is free space in the array and we just write the value at index count, so the best-case time complexity is O(1). In the worst case there is no free space, and we must first traverse and sum the array before inserting, so the worst-case time complexity is O(n).
Next, the average time complexity by the earlier method. Let the array length be n. Depending on where the new value lands there are n cases, each with time complexity O(1). There is also one "extra" case, inserting when the array has no free space, with time complexity O(n). All n + 1 cases occur with the same probability, 1/(n + 1), so the average works out to O(1).
In fact, the average analysis for this example doesn't need to be that elaborate and doesn't require any probability theory. Compare this insert() example with the earlier find() example; they differ in two ways:
- Difference one:
First, find() is O(1) only in an extreme case, whereas insert() is O(1) in the vast majority of cases and only occasionally costs O(n). That is the first difference.
- Difference two:
Second, for insert() the O(1) insertions and the O(n) insertions occur in a very regular pattern with a fixed ordering: in general, each O(n) insertion is followed by n - 1 O(1) insertions, over and over.
So for this kind of special scenario we don't need to enumerate every possible input and its probability and then compute a weighted average, as the average-case analysis does; that would be overkill.
For this special scenario, we introduce a simpler analysis method: amortized analysis. The time complexity obtained by amortized analysis is called amortized time complexity.
So how to use the amortized analysis method to analyze the amortized time complexity of the algorithm?
Look at insert() again. Each O(n) insertion is followed by n - 1 O(1) insertions, so the cost of the expensive operation can be spread evenly over the following n - 1 cheap ones; amortized over this whole group of consecutive operations, each one costs O(1). That is the general idea of amortized analysis.
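Counting it out over one full cycle of n insertions (one expensive insertion costing about n steps, followed by n - 1 cheap ones costing 1 step each):
<center>$\dfrac{n + (n-1)\cdot 1}{n} = \dfrac{2n-1}{n} < 2 = O(1)\ \text{per insertion}$</center>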
When a sequence of operations on a data structure is cheap most of the time and expensive only occasionally, and the expensive operations appear in a regular pattern relative to the cheap ones, we can analyze the whole group together and see whether the cost of the expensive operations can be amortized over the cheap ones. Moreover, wherever amortized analysis applies, the amortized time complexity generally equals the best-case time complexity.
In short, you can think of amortized time complexity as a limiting, special case of average time complexity.
Amortized time complexity may not be well understood, especially the difference from average time complexity. However, we can understand it as a special kind of average time complexity. The most important thing is to master its analysis method, this kind of thinking.
The reason why these concepts of complexity are introduced is that in the same piece of code, the complexity level may be different under different inputs. After introducing these concepts, we can more comprehensively express the execution efficiency of a piece of code.
Usually people care most about the best-case time complexity; after all, the whole point is to improve time efficiency, right?
but
I actually care more about the worst case than the best case, for the following reasons:
(1) The worst case gives an upper bound on the algorithm's execution time, so we can be sure that no matter what input arrives, the running time will not exceed that bound; this gives us a baseline for comparison and analysis.
(2) The worst case is a pessimistic estimate, but for many problems the average-case and worst-case time complexities are similar. For insertion sort, for example, both are quadratic functions of the input length n.
space complexity
There is not much you need to know about space complexity compared to time complexity.
Like time complexity, it describes a trend, but here the trend is the memory occupied by temporary variables while the code runs.
Um? temporary?
It starts with the execution of the code in the computer.
The storage occupation of the code running in the computer is mainly divided into 3 parts
- the code itself
- input data
- temporary variable
The first two occupy space no matter how the code performs, so when measuring the space complexity of a piece of code we only care about the memory it temporarily occupies while running.
How to calculate?
<center>R(n) = O(f(n))</center>
The space complexity is denoted as R(n), and the representation is consistent with the time complexity S(n).
Time to analyze.
Let's use a simple code to understand the space complexity.
# python
def longege_ListSum(n):
    let = []
    for i in range(n):
        let.append(i)
    return let
There are clearly two temporary variables in this code: let and i.
let starts as an empty list whose memory grows with the for loop until it holds n elements, so its space complexity is O(n); i only stores the loop index, a constant amount of space unrelated to n. So the space complexity of this code is O(n).
Let's look at it again
O($n^2$)
//java
public static List<int[]> longege_ListSum(int n) {
    List<int[]> arr1 = new ArrayList<>();   // needs java.util.List / java.util.ArrayList
    for(int i = 0; i < n; i++) {
        int[] arr2 = new int[0];
        for(int j = 0; j < n; j++) {
            arr1.add(arr2);                 // n * n additions in total
        }
    }
    return arr1;
}
Same method of analysis: the code above builds arr1 holding n x n entries, i.e. $n^2$ of them, so the space complexity of this code is O($n^2$).
And what about binary search?
int select(int a[], int k, int len)
{
    int left = 0;
    int right = len - 1;
    while (left <= right)
    {
        int mid = left + ((right - left) >> 1);
        if (a[mid] == k)
        {
            return 1;
        }
        else if (a[mid] > k)
        {
            right = mid - 1;
        }
        else
        {
            left = mid + 1;
        }
    }
    return -1;
}
The space used by k, len, and a[] in this code does not grow with the amount of data processed, so its space complexity is R(n) = O(1).
by the way
In the worst case the loop runs x times until $n / 2^x = 1$, i.e. $x = \log_2 n$, so its time complexity is O($\log_2 n$).
When computing Fibonacci numbers recursively, the space complexity comes from the function call stack. To compute the 5th Fibonacci number we first push a frame for the 4th, then the 3rd, and so on down to the 2nd and 1st; as each call returns, its frame is popped before the next branch is explored, so the stack never holds more than about one frame per level.
The maximum number of frames is roughly the recursion depth, so to compute the Nth Fibonacci number the space complexity is O(N).
Time complexity of recursive algorithm
1) One recursive call: if a recursive function makes only one recursive call, its recursion depth is depth, and each call itself does T work, then the overall time complexity is O(T * depth).
For example, binary search has recursion depth logn (the range halves each time), so its complexity is O(T * logn).
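As an illustration of the single-call case, here is a recursive binary search (a sketch I am adding, not code from the article): it makes one recursive call per level, does O(1) work per call, and recurses to a depth of about logn, so both its time complexity and its recursion-depth space complexity are O(logn).
//java
// search for x in the sorted array a, between indices left and right (inclusive)
public static int binarySearch(int[] a, int left, int right, int x) {
    if (left > right) {
        return -1;                        // not found
    }
    int mid = left + (right - left) / 2;  // avoids overflow of (left + right)
    if (a[mid] == x) {
        return mid;
    } else if (a[mid] > x) {
        return binarySearch(a, left, mid - 1, x);   // one recursive call
    } else {
        return binarySearch(a, mid + 1, right, x);  // one recursive call
    }
}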
To sum up: recursive time complexity = number of recursive calls x work done per call; recursive space complexity = recursion depth (i.e. the height of the recursion tree).
2) Two recursive calls (focus on the number of recursive calls)
Let's look at the following example
int f(int n) {
assert(n >= 0);
if(n == 0)
return 1;
else
return f(n-1) + f(n-1);
}
How do we count the number of recursive calls? Draw the recursion tree and count its nodes.
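For this f(n), each call spawns two calls on n - 1, so the recursion tree is a full binary tree of depth n, and the number of nodes (i.e. the number of calls) is:
<center>$2^0 + 2^1 + \cdots + 2^n = 2^{n+1} - 1 = O(2^n)$</center>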
As you can see, this is an exponential algorithm, very slow! But this kind of recursion is still very meaningful in search problems.
But advanced sorting algorithms such as merge sort and quicksort also make two recursive calls, yet their time complexity is only O(nlogn). That is because (see the recurrence sketch after this list):
1) The recursion depth above is n, while the recursion depth of those sorts is only logn.
2) In those sorting algorithms, the amount of data handled at each node keeps shrinking as we recurse (each call works on about half of its parent's data).
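In recurrence form (a standard result, stated here for contrast with f(n) above), merge sort satisfies
<center>$T(n) = 2\,T(n/2) + O(n) \;\Rightarrow\; T(n) = O(n\log n)$</center>
because the recursion depth is logn and each level does O(n) total work, whereas the f(n) above satisfies $T(n) = 2\,T(n-1) + O(1) = O(2^n)$.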
Optimization of the Fibonacci sequence
Iterative Fibonacci algorithm:
// time complexity: O(n)
// space complexity: O(1)
#include <stdio.h>
#include <stdlib.h>

long long Fib(long N)
{
    long long first = 1;
    long long second = 1;
    long long ret = 0;
    int i = 3;
    for (; i <= N; ++i)
    {
        ret = first + second;
        first = second;
        second = ret;
    }
    return second;
}

int main()
{
    printf("%lld\n", Fib(50));
    system("pause");
    return 0;
}
// Recursive Fibonacci algorithm:
// time complexity: O(2^n)
// space complexity: O(n)
long long Fib(long long N)
{
    return (N < 3) ? 1 : Fib(N - 1) + Fib(N - 2);
}

int main()
{
    printf("%lld\n", Fib(50));
    system("pause");
    return 0;
}
Note:
The time complexity of the recursive Fibonacci is the number of nodes in the recursion (binary) tree;
its space complexity is the depth of the function call stack, i.e. the depth of that binary tree.
// Tail-recursive Fibonacci algorithm (optimized):
// time complexity: O(n)
// space complexity: O(n)
long long Fib(long long first,long long second,int N)
{
if (N < 3 )
{
return 1;
}
if (N == 3)
{
return first + second;
}
return Fib(second,first+second,N-1);
}
Use the matrix power algorithm to optimize the Fibonacci sequence algorithm again.
static int Fibonacci(int n)
{
if (n <= 1)
return n;
int[,] f = { { 1, 1 }, { 1, 0 } };
Power(f, n - 1);
return f[0, 0];
}
static void Power(int[,] f, int n)
{
if (n <= 1)
return;
int[,] m = { { 1, 1 }, { 1, 0 } };
Power(f, n / 2);
Multiply(f, f);
if (n % 2 != 0)
Multiply(f, m);
}
static void Multiply(int[,] f, int[,] m)
{
int x = f[0, 0] * m[0, 0] + f[0, 1] * m[1, 0];
int y = f[0, 0] * m[0, 1] + f[0, 1] * m[1, 1];
int z = f[1, 0] * m[0, 0] + f[1, 1] * m[1, 0];
int w = f[1, 0] * m[0, 1] + f[1, 1] * m[1, 1];
f[0, 0] = x;
f[0, 1] = y;
f[1, 0] = z;
f[1, 1] = w;
}
The complexity of the optimized algorithm is O($\log_2 n$).
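The matrix-power trick rests on a standard identity about Fibonacci numbers:
<center>$\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}^{n} = \begin{pmatrix}F_{n+1} & F_{n}\\ F_{n} & F_{n-1}\end{pmatrix}$</center>
So computing F(n) reduces to raising a 2x2 matrix to the n-th power, and fast exponentiation by squaring (which is what Power() does above) needs only O(logn) matrix multiplications.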
Please commit the following to memory.
While grinding through algorithm problems you will run into all kinds of time complexities, but however your code changes, it can hardly escape the following common ones, listed from most to least efficient: O(1), O(logn), O(n), O(nlogn), O($n^2$), O($n^3$), O($2^n$), O(n!). O(1) is the most efficient; O(n) and O($n^2$) were covered above; the ones at the end are ridiculously inefficient. If we are unlucky enough to meet them later I'll point it out, but I won't go into detail here.
Some readers may still wonder: computer hardware keeps getting faster and faster, so why do we care so much about time complexity?
Good question. Let's compare a few examples and see how large the gaps really are.
There are 100 people standing in front of you, and you spot the one you like at a glance.
Replace 100 with 10,000 and it makes no difference to the result.
That's O(1). Now 100 people stand in front of you and you must look at them one by one to find your goddess. Whatever order you choose, I can always arrange for her to be the last one you look at. Replace 100 with 10,000 and you have 100 times the work.
this is O(n)
Now 100 people stand in front of you, none of whose faces you can tell apart, and you must compare them pair by pair to find the only set of twins. As before, I can always arrange for the twins to be the last pair you check, so you will examine at most 4950 pairs in total.
Replace 100 with 10,000 and that becomes 49,995,000 pairs, roughly 10,100 times the work.
this is O(n²)
There are 128 men standing in front of you and you must sort them by height, say by merging: first split them into two 64-man teams, each of those into two 32-man teams, and so on, until every "team" has only one person left. A single person is, obviously, already sorted.
Then reverse the process and merge the teams you just split. Because each team is already sorted, merging only requires comparing the people at the front of the two teams and taking out the shorter one each time, and the merged, double-sized team is again in order. Each merge pass is O(n): the total number of comparisons never exceeds the total number of people in the two teams.
The only remaining question is how many rounds of "split and merge" there are. The team size halves at each split, so there are 7 rounds (128 = $2^7$), and roughly 7 x 128 operations are needed in total.
this is O(nlogn)
Different time complexities lead to obviously different amounts of work. Alright, here is what you need to keep in mind:
- The time complexity of common data structures
- The time complexity of common sorting algorithms
What does the stability of a sorting algorithm mean? If two equal elements keep their original relative order after sorting, we say the algorithm is stable. Why does it matter? With a stable sort, when you sort a second time on another key, the order established by the first sort is preserved among records whose second key is equal.
Finally, on the relationship between algorithms and performance: we judge an algorithm mainly by how its time and space consumption grow with the amount of data, and that ultimately determines the program's performance. Often, using less space costs more time and vice versa, which is why performance-tuning advice so often talks about trading space for time, or time for space.
That wraps up complexity analysis. If you read this article carefully, you should now have a solid basic understanding of it. Complexity analysis itself is not hard; just remember to consciously estimate the complexity of your own code whenever you write it, and it will become second nature.