Article from: https://www.cnblogs.com/gaochaochao/p/9216079.html

 

Taking the vector as an example: dynamic expansion algorithm design and time complexity analysis

Implementation of the expansion algorithm

How is capacity expansion implemented? How large should the new capacity be?

For a container that keeps its elements in an internal data area, dynamic expansion is necessary because the growth of the container's size cannot be predicted in advance. At the same time, the data area must remain contiguous not only logically but also in its physical addresses. Therefore, before each insertion the container must ask: is expansion needed?

For example, in Figure 2.1 (c~e): we first request a larger block of contiguous physical memory to serve as the new data area, say a new array B[]; we then copy the original array's contents into the new data area (Figure d), after which the new element e can be inserted. Finally, the space occupied by the original data area must be released and returned to the operating system.

A feasible implementation of the algorithm is as follows:

   template <typename T> void Vector<T>::expand() // expand when the vector's space is insufficient
   {
      if (_size < _capacity) return; // not yet full: no expansion needed
      if (_capacity < DEFAULT_CAPACITY) // do not fall below the minimum capacity
         _capacity = DEFAULT_CAPACITY;
      T* oldElem = _elem;
      _elem = new T[_capacity <<= 1]; // double the capacity
      for (int i = 0; i < _size; i++) // copy the original contents (T is a basic type, or a custom type with the '=' operator overloaded)
         _elem[i] = oldElem[i];
      delete[] oldElem; // release the original space
   }

From the algorithm above we can see that the new array's capacity is twice that of the original array.
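To see the strategy in action, here is a minimal, self-contained sketch. It is not the article's full Vector class: the value of DEFAULT_CAPACITY, the push_back() wrapper, and the instrumentation in main() are assumptions added for illustration.

   #include <iostream>

   template <typename T>
   class Vector { // minimal sketch: only what expand() needs
   public:
      static const int DEFAULT_CAPACITY = 3; // assumed minimum capacity
      Vector() : _size(0), _capacity(DEFAULT_CAPACITY), _elem(new T[DEFAULT_CAPACITY]) {}
      ~Vector() { delete[] _elem; }
      int size() const { return _size; }
      int capacity() const { return _capacity; }
      void push_back(const T& e) { expand(); _elem[_size++] = e; } // expand first if full, then append
   private:
      void expand() { // same doubling strategy as above
         if (_size < _capacity) return;
         if (_capacity < DEFAULT_CAPACITY) _capacity = DEFAULT_CAPACITY;
         T* oldElem = _elem;
         _elem = new T[_capacity <<= 1]; // double the capacity
         for (int i = 0; i < _size; i++) _elem[i] = oldElem[i];
         delete[] oldElem;
      }
      int _size, _capacity;
      T* _elem;
   };

   int main() {
      Vector<int> v;
      for (int i = 0; i < 20; i++) {
         int before = v.capacity();
         v.push_back(i);
         if (v.capacity() != before) // report each expansion
            std::cout << "size " << v.size() << ": capacity "
                      << before << " -> " << v.capacity() << "\n";
      }
   }

With an initial capacity of 3, this reports expansions at sizes 4, 7, and 13, i.e., the capacity sequence 3 -> 6 -> 12 -> 24.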

Amortized analysis

 

Time cost

Compared with an ordinary array, the expandable vector is more flexible: its capacity is not limited to the initial capacity. But this flexibility comes at a price. In the worst case, an expansion grows the data area from n to 2n and takes O(n) time, so insertion efficiency seems to be dragged down; this is an illusion, however. After each O(n) expansion the array's capacity doubles, which means at least n further insertions must take place before the next expansion can occur. That is, as the vector grows, the probability that an insertion triggers an expansion drops rapidly. In the amortized sense, the time cost of expansion is therefore not high. The amortized time complexity is analyzed below.

Amortized complexity

Let the expandable vector undergo sufficiently many consecutive operations, and apportion the total time consumed over all of those operations. The average time cost thus charged to each operation is called the amortized running time. Note the difference from the average running time: the latter is a weighted average, under a given probability distribution, of the execution times in the various cases, and is also called the expected running time. The former requires that the operations participating in the apportionment come from a real, feasible sequence of operations, and that the sequence be long enough. By comparison, amortized complexity gives a more objective and accurate estimate of the computational cost and efficiency.

O(1) amortized time

 

Consider n consecutive operations (query, insert, delete, etc.): sum the time spent on array expansion across all of them and divide by n. As long as n is large enough, the result is the amortized time cost of expansion per operation.
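As a rough empirical check of this definition, the sketch below counts the element copies caused by expansion over n insertions and divides by n. The initial capacity of 3 and the copy counter are illustrative assumptions, not part of the original algorithm.

   #include <iostream>

   int main() {
      long long copies = 0;       // total elements copied during expansions
      int capacity = 3, size = 0; // assumed initial capacity
      const int n = 1000000;
      for (int i = 0; i < n; i++) {
         if (size == capacity) {  // full: double the capacity, copying size elements
            copies += size;
            capacity <<= 1;
         }
         ++size;
      }
      std::cout << "total copies: " << copies
                << ", amortized per insertion: "
                << static_cast<double>(copies) / n << "\n"; // stays below 2, a constant
   }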

Assume the array's initial capacity is some constant N. Since we are estimating an upper bound on the complexity, also take the worst case: the vector's initial size is N, i.e., it is on the verge of overflowing. It is easy to see that no operation other than insertion can cause an overflow, so in the worst case suppose all of the n consecutive operations are insertions, with n ≫ N. Define the following functions:

  • size(n) = the vector's size after n consecutive insertions
  • capacity(n) = the vector's capacity after n consecutive insertions
  • T(n) = the time spent on expansion during n consecutive insertions

The vector's size starts at N and increases monotonically as the insertions proceed: size(n) = n + N

Since overflow is always resolved immediately, the load factor never exceeds 100%. Moreover, the algorithm follows a lazy strategy, doubling the capacity only when an overflow actually occurs; immediately after a doubling the size is exactly half of the capacity, so the load factor never drops below 50%.

This yields the following relation:

size(n) <= capacity(n) < 2*size(n)

Treating N as a constant, we have capacity(n) = O(size(n)) = O(n).

The capacity grows geometrically with ratio 2. Before it reaches capacity(n), a total of O(log n) expansions take place. The time required for each expansion is proportional to the capacity at that moment, and these costs likewise grow geometrically with ratio 2. Accumulating the time consumed by all expansions:

T(n) = 2N + 4N + 8N + … + capacity(n) < 2*capacity(n) = O(n)
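As a concrete check with hypothetical numbers: take N = 4 and n = 60. Then size(60) = 64 and capacity(60) = 64 (the capacity doubles 4 -> 8 -> 16 -> 32 -> 64), and the sum above gives T(60) = 8 + 16 + 32 + 64 = 120 < 2*64 = 128, consistent with T(n) = O(n).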

Apportioned over the n consecutive operations, the amortized time required by a single operation is O(1).

 
