ArrayList is one of the most commonly used List implementations. Its boolean add(E e) is fairly straightforward:

/**
 * Appends the specified element to the end of this list.
 *
 * @param e element to be appended to this list
 * @return <tt>true</tt> (as specified by {@link Collection#add})
 */
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}

Sometimes void add(int index, E element) is used instead, to insert an element at a specific index. The JDK implements it as:

/**
 * Inserts the specified element at the specified position in this
 * list. Shifts the element currently at that position (if any) and
 * any subsequent elements to the right (adds one to their indices).
 *
 * @param index index at which the specified element is to be inserted
 * @param element element to be inserted
 * @throws IndexOutOfBoundsException {@inheritDoc}
 */
public void add(int index, E element) {
    rangeCheckForAdd(index);

    ensureCapacityInternal(size + 1);  // Increments modCount!!
    System.arraycopy(elementData, index, elementData, index + 1,
                     size - index);
    elementData[index] = element;
    size++;
}

This version does slightly more work: it first makes sure the elementData array has enough capacity, then shifts every element from index to the end one slot to the right, and finally writes the new element into position index.
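The shifting behavior is easy to observe from the API itself:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AddDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        list.add("d");      // append: [a, b, c, d]
        list.add(1, "x");   // insert at index 1, shifting b, c and d to the right
        System.out.println(list); // [a, x, b, c, d]
    }
}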

I had always assumed that System.arraycopy, being a native method, was implemented underneath as a call to memcpy: direct, convenient, and with no efficiency concerns.

But reading the OpenJDK source today, I found that this is not the case.

Take OpenJDK 8u60 as an example. In objArrayKlass.cpp:

void ObjArrayKlass::copy_array(arrayOop s, int src_pos, arrayOop d,
                               int dst_pos, int length, TRAPS) {
  assert(s->is_objArray(), "must be obj array");

  if (!d->is_objArray()) {
    THROW(vmSymbols::java_lang_ArrayStoreException());
  }

  // Check is all offsets and lengths are non negative
  if (src_pos < 0 || dst_pos < 0 || length < 0) {
    THROW(vmSymbols::java_lang_ArrayIndexOutOfBoundsException());
  }
  // Check if the ranges are valid
  if  ( (((unsigned int) length + (unsigned int) src_pos) > (unsigned int) s->length())
     || (((unsigned int) length + (unsigned int) dst_pos) > (unsigned int) d->length()) ) {
    THROW(vmSymbols::java_lang_ArrayIndexOutOfBoundsException());
  }

  // Special case. Boundary cases must be checked first
  // This allows the following call: copy_array(s, s.length(), d.length(), 0).
  // This is correct, since the position is supposed to be an 'in between point', i.e., s.length(),
  // points to the right of the last element.
  if (length==0) {
    return;
  }
  if (UseCompressedOops) {
    narrowOop* const src = objArrayOop(s)->obj_at_addr<narrowOop>(src_pos);
    narrowOop* const dst = objArrayOop(d)->obj_at_addr<narrowOop>(dst_pos);
    do_copy<narrowOop>(s, src, d, dst, length, CHECK);
  } else {
    oop* const src = objArrayOop(s)->obj_at_addr<oop>(src_pos);
    oop* const dst = objArrayOop(d)->obj_at_addr<oop>(dst_pos);
    do_copy<oop> (s, src, d, dst, length, CHECK);
  }
}

As you can see, after copy_array performs its various checks, the actual copying is delegated to do_copy, which is implemented as follows:

// Either oop or narrowOop depending on UseCompressedOops.
template <class T> void ObjArrayKlass::do_copy(arrayOop s, T* src,
                               arrayOop d, T* dst, int length, TRAPS) {

  BarrierSet* bs = Universe::heap()->barrier_set();
  // For performance reasons, we assume we are that the write barrier we
  // are using has optimized modes for arrays of references.  At least one
  // of the asserts below will fail if this is not the case.
  assert(bs->has_write_ref_array_opt(), "Barrier set must have ref array opt");
  assert(bs->has_write_ref_array_pre_opt(), "For pre-barrier as well.");

  if (s == d) {
    // since source and destination are equal we do not need conversion checks.
    assert(length > 0, "sanity check");
    bs->write_ref_array_pre(dst, length);
    Copy::conjoint_oops_atomic(src, dst, length);
  } else {
    // We have to make sure all elements conform to the destination array
    Klass* bound = ObjArrayKlass::cast(d->klass())->element_klass();
    Klass* stype = ObjArrayKlass::cast(s->klass())->element_klass();
    if (stype == bound || stype->is_subtype_of(bound)) {
      // elements are guaranteed to be subtypes, so no check necessary
      bs->write_ref_array_pre(dst, length);
      Copy::conjoint_oops_atomic(src, dst, length);
    } else {
      // slow case: need individual subtype checks
      // note: don't use obj_at_put below because it includes a redundant store check
      T* from = src;
      T* end = from + length;
      for (T* p = dst; from < end; from++, p++) {
        // XXX this is going to be slow.
        T element = *from;
        // even slower now
        bool element_is_null = oopDesc::is_null(element);
        oop new_val = element_is_null ? oop(NULL)
                                      : oopDesc::decode_heap_oop_not_null(element);
        if (element_is_null ||
            (new_val->klass())->is_subtype_of(bound)) {
          bs->write_ref_field_pre(p, new_val);
          *p = element;
        } else {
          // We must do a barrier to cover the partial copy.
          const size_t pd = pointer_delta(p, dst, (size_t)heapOopSize);
          // pointer delta is scaled to number of elements (length field in
          // objArrayOop) which we assume is 32 bit.
          assert(pd == (size_t)(int)pd, "length field overflow");
          bs->write_ref_array((HeapWord*)dst, pd);
          THROW(vmSymbols::java_lang_ArrayStoreException());
          return;
        }
      }
    }
  }
  bs->write_ref_array((HeapWord*)dst, length);
}

So after the pre-write barrier is applied, the copy either goes through Copy::conjoint_oops_atomic, or, when the element types require per-element subtype checks, the elements are moved one by one in a for loop. Either way, System.arraycopy on object arrays does more work than I had imagined; it is not a plain memcpy.
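The slow path with per-element checks can be observed from plain Java: copying from an Object[] into a String[] has to check each element's runtime type, and an incompatible element triggers the same ArrayStoreException thrown in do_copy above. For example:

public class ArrayStoreDemo {
    public static void main(String[] args) {
        Object[] src = { "ok", Integer.valueOf(42) };
        String[] dst = new String[2];
        // "ok" is copied first, then the Integer fails the per-element
        // subtype check and ArrayStoreException is thrown
        // (dst[0] already holds "ok" at that point).
        System.arraycopy(src, 0, dst, 0, 2);
    }
}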

Now consider inserting m elements, each at a given position, one by one into an ArrayList of length n via ArrayList.add(int, E). Each call shifts the elements behind the insertion point with System.arraycopy, and the list grows from n to n + m along the way, so the total work is on the order of m*n, or more precisely O(m*(m+n)) once the already-inserted elements are counted. If both m and n are large, this is genuinely slow.

A more efficient approach is to allocate an array or ArrayList of length m + n up front, place the m new elements at their target positions, and fill the remaining slots with the n original elements in order. That brings the total cost down to O(m + n); a sketch follows.
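A minimal sketch of this idea (the helper name insertAll and the convention that positions holds the target indices of the new elements in the result list, sorted ascending, are my own assumptions, not a JDK API):

import java.util.ArrayList;
import java.util.List;

public class InsertAll {
    /** Builds the merged list in one pass: O(m + n) instead of O(m * (m + n)). */
    static <E> List<E> insertAll(List<E> original, int[] positions, List<E> newElements) {
        int n = original.size(), m = newElements.size();
        List<E> result = new ArrayList<>(n + m);
        int oi = 0; // next element to take from the original list
        int ni = 0; // next new element to place
        for (int i = 0; i < n + m; i++) {
            if (ni < m && positions[ni] == i) {
                result.add(newElements.get(ni++)); // this slot belongs to a new element
            } else {
                result.add(original.get(oi++));    // otherwise keep consuming the original list
            }
        }
        return result;
    }
}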

One more thing: both add implementations above call ensureCapacityInternal(int). The core of that capacity-ensuring logic is in grow(int):

/**
 * Increases the capacity to ensure that it can hold at least the
 * number of elements specified by the minimum capacity argument.
 *
 * @param minCapacity the desired minimum capacity
 */
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}

As many people know, for efficiency reasons ArrayList does not grow its capacity to exactly the requested minCapacity. The logic for computing the new capacity is: if the requested capacity is larger than 1.5 times the current capacity, reallocate to exactly the requested capacity; otherwise grow to 1.5 times the current capacity, capped near Integer.MAX_VALUE by hugeCapacity. oldCapacity + (oldCapacity >> 1) is exactly 1.5 times the current capacity, equivalent to (int) (oldCapacity * 1.5), but it needs only a shift and an addition rather than floating-point arithmetic.

So when elements are added to an ArrayList one at a time, the capacity grows by a factor of 1.5 whenever it runs out of room. As for the factor 1.5 itself, perhaps doubling was considered too aggressive, or perhaps the choice is backed by experimental data.
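Replaying the formula from grow above, starting from ArrayList's default capacity of 10, the capacities seen while adding elements one at a time look like this:

public class GrowthDemo {
    public static void main(String[] args) {
        int capacity = 10; // ArrayList's DEFAULT_CAPACITY
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            sb.append(capacity).append(" -> ");
            capacity = capacity + (capacity >> 1); // 1.5x via shift and add
        }
        System.out.println(sb.append("...")); // 10 -> 15 -> 22 -> 33 -> 49 -> 73 -> ...
    }
}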

Finally, Arrays.copyOf, which appears in grow, allocates a new array and copies elementData into it: if the new array is longer, the trailing slots are left as null; if it is shorter, the copy is truncated. Under the hood it again calls System.arraycopy.
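The padding and truncation behavior is easy to see:

import java.util.Arrays;

public class CopyOfDemo {
    public static void main(String[] args) {
        String[] data = { "a", "b", "c" };
        // Larger target: extra slots are filled with null
        System.out.println(Arrays.toString(Arrays.copyOf(data, 5))); // [a, b, c, null, null]
        // Smaller target: the copy is truncated
        System.out.println(Arrays.toString(Arrays.copyOf(data, 2))); // [a, b]
    }
}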

