1. Overview
2. Arrays vs. collections
(1) An array has a fixed length, and its elements may be references or primitive types.
(2) A collection has a variable length (it can grow dynamically), and its elements can only be reference types.
3. Collection vs. Map
(1) A Collection stores one element at a time; it is a single-column container.
(2) A Map stores a key-value pair at a time.
4. List vs. Set
(1) A List may contain duplicate elements.
(2) A Set may not contain duplicates.
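A minimal sketch illustrating both rules (the class and variable names are mine, not from the notes):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ListVsSetDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("a");          // duplicate is kept
        Set<String> set = new HashSet<>();
        set.add("a");
        set.add("a");           // duplicate is dropped; add returns false
        System.out.println(list.size()); // 2
        System.out.println(set.size());  // 1
    }
}
```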
5. Vector
(1) Backed by an array, so random access is fast but insertion/removal is slow.
protected Object[] elementData;
(2) Thread safety comes from marking methods synchronized:
public synchronized boolean add(E e) {
    modCount++;
    ensureCapacityHelper(elementCount + 1);
    elementData[elementCount++] = e;
    return true;
}
(3) Growth: if no capacityIncrement was set, the capacity doubles each time; the old array's contents are then copied into the new array.
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + ((capacityIncrement > 0) ?
                                     capacityIncrement : oldCapacity);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    elementData = Arrays.copyOf(elementData, newCapacity);
}
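Both growth modes can be observed through Vector.capacity(); a small sketch (the class name is mine):

```java
import java.util.Vector;

public class VectorGrowDemo {
    public static void main(String[] args) {
        // No capacityIncrement: capacity doubles when the array fills up
        Vector<Integer> v = new Vector<>(4);
        for (int i = 0; i < 5; i++) v.add(i);
        System.out.println(v.capacity()); // 8 = 4 * 2

        // Explicit capacityIncrement: grows by that fixed step instead
        Vector<Integer> w = new Vector<>(4, 3);
        for (int i = 0; i < 5; i++) w.add(i);
        System.out.println(w.capacity()); // 7 = 4 + 3
    }
}
```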
6. ArrayList
(1) Also backed by an array.
transient Object[] elementData; // non-private to simplify nested class access
(2) Not thread-safe (no synchronization in add):
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}
(3) Grows to 1.5x the old capacity:
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}
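The 1.5x rule can be traced with the same arithmetic grow() uses, old + old/2 in integer math (the demo class name is mine):

```java
public class ArrayListGrowthDemo {
    public static void main(String[] args) {
        // Same arithmetic as grow(): newCapacity = old + (old >> 1)
        int capacity = 10; // ArrayList's DEFAULT_CAPACITY
        for (int i = 0; i < 3; i++) {
            capacity = capacity + (capacity >> 1);
            System.out.println(capacity); // 15, then 22, then 33
        }
    }
}
```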
7. LinkedList
(1) Backed by a doubly linked list:
transient Node<E> first;
transient Node<E> last;

private static class Node<E> {
    E item;
    Node<E> next;
    Node<E> prev;

    Node(Node<E> prev, E element, Node<E> next) {
        this.item = element;
        this.next = next;
        this.prev = prev;
    }
}
(2) Not thread-safe. Because it is a linked list there is no resizing; new data is simply linked at the tail:
public boolean add(E e) {
    linkLast(e);
    return true;
}

void linkLast(E e) {
    final Node<E> l = last;
    final Node<E> newNode = new Node<>(l, e, null);
    last = newNode;
    if (l == null)
        first = newNode;
    else
        l.next = newNode;
    size++;
    modCount++;
}
8. CopyOnWriteArrayList
(1) Locking rules: read-read and read-write do not block each other; only write-write is mutually exclusive.
(2) Source walkthrough:
Writes take a lock; reads do not. The copy-on-write trait: every write re-creates the backing array and copies the old values into it.
public boolean add(E e) {
    final ReentrantLock lock = this.lock;
    // writes are performed under the lock
    lock.lock();
    try {
        // the current array
        Object[] elements = getArray();
        int len = elements.length;
        // every add allocates a new array one slot longer and copies the
        // old values into it, so the array never carries spare capacity
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        newElements[len] = e;
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}

// reads take no lock; the value is fetched directly by array index
public E get(int index) {
    return get(getArray(), index);
}
(3) Drawbacks: a reader may still see the old array, so freshly written data (in the new array) is not yet visible to it; and every write allocates a new array, briefly holding twice the memory. CopyOnWriteArrayList therefore suits read-heavy, write-light workloads that do not need real-time data.
(4) An iterator over a CopyOnWriteArrayList sees the data as of the moment the iterator was created; later modifications are invisible to it.
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteTest {
    public static void main(String[] args) {
        CopyOnWriteArrayList<Integer> copyOnWriteArrayList = new CopyOnWriteArrayList<>();
        copyOnWriteArrayList.add(0);
        copyOnWriteArrayList.add(1);
        copyOnWriteArrayList.add(2);
        copyOnWriteArrayList.add(3);
        Iterator<Integer> iterator = copyOnWriteArrayList.iterator();
        copyOnWriteArrayList.remove(3);
        copyOnWriteArrayList.add(4);
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
        System.out.println("-------------------------");
        Iterator<Integer> iterator2 = copyOnWriteArrayList.iterator();
        while (iterator2.hasNext()) {
            System.out.println(iterator2.next());
        }
    }
}
Output:
0 1 2 3
-------------------------
0 1 2 4
9. LinkedHashMap
(1) LinkedHashMap extends HashMap and is likewise not thread-safe. On top of HashMap it adds iteration in insertion order (or access order): it additionally maintains a doubly linked list recording the order in which entries were inserted (or accessed).
static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}

private static final long serialVersionUID = 3801124242820219131L;

transient LinkedHashMap.Entry<K,V> head;
transient LinkedHashMap.Entry<K,V> tail;

Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) {
    LinkedHashMap.Entry<K,V> p =
        new LinkedHashMap.Entry<K,V>(hash, key, value, e);
    linkNodeLast(p);
    return p;
}
LinkedHashMap overrides HashMap's newNode method, so every node in a LinkedHashMap carries predecessor and successor pointers.
(2) accessOrder = true orders the doubly linked list from least to most recently accessed;
false orders it from oldest to newest insertion. The default is false.
final boolean accessOrder;
(3) Test code:
package collections;

import java.util.LinkedHashMap;

public class LinkedHashMapTest {
    public static void main(String[] args) {
        LinkedHashMap<Integer, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put(1, 1);
        map.put(2, 2);
        map.put(3, 3);
        map.put(4, 4);
        map.get(4);
        map.get(3);
        map.get(2);
        map.get(1);
        map.forEach((a, b) -> {
            System.out.println(a + ":" + b);
        });
        System.out.println("------------------------------------");
        LinkedHashMap<Integer, Integer> map2 = new LinkedHashMap<>();
        map2.put(1, 1);
        map2.put(2, 2);
        map2.put(3, 3);
        map2.put(4, 4);
        map2.get(2);
        map2.forEach((a, b) -> {
            System.out.println(a + ":" + b);
        });
    }
}
Output:
4:4 3:3 2:2 1:1
------------------------------------
1:1 2:2 3:3 4:4
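Not covered in the notes above, but a common application of accessOrder = true: combined with the protected removeEldestEntry hook, LinkedHashMap becomes a small LRU cache. A minimal sketch (class name and capacity are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = order by access, not insertion
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put; evicts the least recently used entry
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(3);
        cache.put(1, "a"); cache.put(2, "b"); cache.put(3, "c");
        cache.get(1);        // touch 1 so it becomes most recently used
        cache.put(4, "d");   // evicts 2, the least recently used
        System.out.println(cache.keySet()); // [3, 1, 4]
    }
}
```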
10. TreeMap
(1) TreeMap is built on a red-black tree and keeps its entries sorted.
(2) Unlike LinkedHashMap, which can only iterate in insertion (or access) order, TreeMap accepts a Comparator implementation to define the ordering; without one it defaults to ascending key order.
import java.util.Comparator;
import java.util.TreeMap;

public class TreeMapTest {
    public static void main(String[] args) {
        TreeMap<Integer, Integer> treeMap = new TreeMap<>(new Comparator<Integer>() {
            @Override
            public int compare(Integer o1, Integer o2) {
                if (o1 > o2) {
                    return -1;
                } else if (o1 < o2) {
                    return 1;
                }
                return 0;
            }
        });
        treeMap.put(1, 1);
        treeMap.put(2, 2);
        treeMap.put(3, 3);
        treeMap.put(4, 4);
        treeMap.forEach((a, b) -> {
            System.out.println(a + ":" + b);
        });
    }
}
Output:
4:4 3:3 2:2 1:1
11. Hashtable
(1) Hashtable's underlying structure is an array of linked lists, and it differs from HashMap in the following 7 ways:
// 1. Methods are synchronized
public synchronized V put(K key, V value) {
    // 2. null values are not allowed
    if (value == null) {
        throw new NullPointerException();
    }

    Entry<?,?> tab[] = table;
    // 3. null keys are not allowed either: key.hashCode() throws for null
    int hash = key.hashCode();
    // 4. The hash is key.hashCode() directly, with no spreading function
    // 5. The array length need not be a power of two, so the index is a
    //    plain modulus rather than hashCode & (2^n - 1)
    int index = (hash & 0x7FFFFFFF) % tab.length;
    @SuppressWarnings("unchecked")
    Entry<K,V> entry = (Entry<K,V>)tab[index];
    for(; entry != null ; entry = entry.next) {
        if ((entry.hash == hash) && entry.key.equals(key)) {
            V old = entry.value;
            entry.value = value;
            return old;
        }
    }

    addEntry(hash, key, value, index);
    return null;
}
protected void rehash() {
    int oldCapacity = table.length;
    Entry<?,?>[] oldMap = table;
    // 6. The capacity grows to 2x + 1, not plain 2x
    int newCapacity = (oldCapacity << 1) + 1;
    if (newCapacity - MAX_ARRAY_SIZE > 0) {
        if (oldCapacity == MAX_ARRAY_SIZE)
            // Keep running with MAX_ARRAY_SIZE buckets
            return;
        newCapacity = MAX_ARRAY_SIZE;
    }
    Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];

    modCount++;
    threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    table = newMap;

    for (int i = oldCapacity ; i-- > 0 ;) {
        for (Entry<K,V> old = (Entry<K,V>)oldMap[i] ; old != null ; ) {
            Entry<K,V> e = old;
            old = old.next;

            int index = (e.hash & 0x7FFFFFFF) % newCapacity;
            e.next = (Entry<K,V>)newMap[index];
            newMap[index] = e;
        }
    }
}
public Hashtable(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: "+
                                           initialCapacity);
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal Load: "+loadFactor);

    if (initialCapacity==0)
        initialCapacity = 1;
    this.loadFactor = loadFactor;
    // 7. The table is not lazily allocated; the default capacity is 11
    table = new Entry<?,?>[initialCapacity];
    threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
}

public Hashtable() {
    // 7. Default capacity is 11
    this(11, 0.75f);
}
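Differences 2 and 3 can be checked directly against HashMap, which permits one null key (hashed to 0) and null values; a small sketch (class name mine):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>();
        hashMap.put(null, 1);   // HashMap allows one null key
        hashMap.put("k", null); // ...and null values
        System.out.println(hashMap.size()); // 2

        Map<String, Integer> table = new Hashtable<>();
        try {
            table.put(null, 1); // Hashtable calls key.hashCode() -> NPE
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
        try {
            table.put("k", null); // explicit null-value check -> NPE
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null values");
        }
    }
}
```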
12. ConcurrentHashMap
(1) ConcurrentHashMap guarantees thread safety with CAS plus synchronized.
When the target bucket is empty, the value is set via CAS inside an outer infinite loop, forming an optimistic lock that retries until the write succeeds.
When the bucket is non-empty, its head node is taken as the lock object, and synchronized guards the update.
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    // 1. The loop retries until the CAS (or locked insert) succeeds
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // 2. The bucket is empty: insert directly, setting the slot via CAS
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                   // no lock when adding to empty bin
        }
        // 3. A resize is in progress and this bucket is being migrated
        //    to the new array; help with the transfer
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            // 4. Lock on the bucket's head node, covering the whole
            //    linked list or red-black tree, then insert
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    addCount(1L, binCount);
    return null;
}
(2) Resize logic of the JDK 1.8 ConcurrentHashMap
a. nextTable holds the new array while a resize is in progress.
Once a bucket's data has been moved to the new array, that slot is replaced with a ForwardingNode.
When get finds a ForwardingNode at a bucket head, it looks the key up in nextTable instead.
When put finds a ForwardingNode at a bucket head, the current thread joins the resize work (migrating data from the old array to the new one).
b. Task assignment is made thread-safe with CAS. Migration proceeds from the back of the array toward the front, and transferIndex records how far tasks have been handed out. Each newly joining thread CASes transferIndex down to (transferIndex - stride); a successful CAS means the thread has claimed that segment and starts migrating it. Bucket heads stop being ForwardingNodes only once the whole resize has finished; until then the thread keeps looping over transfer tasks.
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // subdivide range
    if (nextTab == null) {            // initiating
        try {
            // 1. Allocate a new array twice the old size
            @SuppressWarnings("unchecked")
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {      // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        // 2. Point nextTable at the array being resized into
        nextTable = nextTab;
        transferIndex = n;
    }
    int nextn = nextTab.length;
    // 3. A ForwardingNode marks a bucket as already moved to the new
    //    array while the resize is in progress
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    boolean advance = true;
    boolean finishing = false; // to ensure sweep before committing nextTab
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1;
                advance = false;
            }
            // 4. CAS keeps task assignment thread-safe: migration runs from
            //    back to front, transferIndex marks how far tasks have been
            //    handed out, and each joining thread CASes transferIndex down
            //    by one stride; success means it owns that segment
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                bound = nextBound;
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            if (finishing) {
                nextTable = null;
                table = nextTab;
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                finishing = advance = true;
                i = n; // recheck before commit
            }
        }
        else if ((f = tabAt(tab, i)) == null)
            advance = casTabAt(tab, i, null, fwd);
        else if ((fh = f.hash) == MOVED)
            advance = true; // already processed
        else {
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    Node<K,V> ln, hn;
                    if (fh >= 0) {
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        if (runBit == 0) {
                            ln = lastRun;
                            hn = null;
                        }
                        else {
                            hn = lastRun;
                            ln = null;
                        }
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        setTabAt(tab, i, fwd);
                        advance = true;
                    }
                    else if (f instanceof TreeBin) {
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                            (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                            (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        setTabAt(tab, i, fwd);
                        advance = true;
                    }
                }
            }
        }
    }
}
(3) How size() works
a. Similar in principle to LongAdder:
when there is no contention, the delta is added to baseCount;
under contention, the delta is recorded in a slot of the CounterCell array instead (the slot is chosen from the current thread's probe value, so contending threads tend to land on different cells);
the total is baseCount plus the value of every cell in the counter array.
b. How does it decide between baseCount and the counter array?
It first tries to CAS the delta onto baseCount; a failure means there is contention, so the delta is stored into the thread's counter-array cell instead.
c. As with LongAdder, the result can be stale: after a cell has been folded into the sum it may change again, so size() is only an estimate.
public int size() {
long n = sumCount();
return ((n < 0L) ? 0 :
(n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
(int)n);
}
final long sumCount() {
CounterCell[] as = counterCells; CounterCell a;
long sum = baseCount;
if (as != null) {
for (int i = 0; i < as.length; ++i) {
if ((a = as[i]) != null)
sum += a.value;
}
}
return sum;
}
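For comparison, java.util.concurrent.atomic.LongAdder exposes the same base-plus-cells design directly: sum() folds the base and every cell together, just like sumCount() above, and is exact only once all writers have finished. A small sketch (class name mine):

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) adder.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // After both writers have quiesced, sum() is exact
        System.out.println(adder.sum()); // 20000
    }
}
```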
(4) The JDK 1.7 ConcurrentHashMap:
thread safety came from segment locks (Segment, 16 by default), so at most 16 threads could write at the same time.
(5) ConcurrentHashMap's get takes no lock (Hashtable's get does), so within one bucket read-read and read-write never block each other; only write-write on the same bucket blocks.
(6) Compound operations are not automatically thread-safe
a. ConcurrentHashMap's individual get and put calls are each atomic and thread-safe, but a combination of them is not. The sequence below could be wrapped in a synchronized block, but that is very slow: every increment would lock the entire ConcurrentHashMap.
package collections;

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapTest implements Runnable {
    private static ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMapTest test = new ConcurrentHashMapTest();
        map.put("小明", 1);
        Thread th1 = new Thread(test);
        Thread th2 = new Thread(test);
        th1.start();
        th2.start();
        th1.join();
        th2.join();
        System.out.println(map.get("小明"));
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            int s = map.get("小明");
            map.put("小明", ++s);
        }
    }
}
Output:
1294
b. ConcurrentHashMap provides methods for compound operations (e.g. putIfAbsent, replace) that preserve atomicity. Rewriting the code above to update via replace uses an optimistic lock (CAS plus spinning) to stay safe under concurrency.
public class ConcurrentHashMapTest implements Runnable {
    private static ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMapTest test = new ConcurrentHashMapTest();
        map.put("小明", 0);
        Thread th1 = new Thread(test);
        Thread th2 = new Thread(test);
        th1.start();
        th2.start();
        th1.join();
        th2.join();
        System.out.println(map.get("小明"));
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            int s = map.get("小明");
            int newCount = s + 1;
            while (!map.replace("小明", s, newCount)) {
                // CAS failed: re-read the value and retry the increment
                s = map.get("小明");
                newCount = s + 1;
            }
        }
    }
}
Output:
2000
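Not used in the notes above, but JDK 8 also added atomic compound updates such as merge and compute, which fold the read-modify-write retry loop into the map itself. A sketch of the same counter built on merge (class name mine):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MergeCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge is atomic: no read-then-write race, no manual CAS loop
                map.merge("小明", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(map.get("小明")); // 2000
    }
}
```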
13. HashSet
(1) HashSet is implemented on top of a HashMap:
private transient HashMap<E,Object> map;

// Dummy value to associate with an Object in the backing Map
private static final Object PRESENT = new Object();

public HashSet() {
    map = new HashMap<>();
}
(2) The add method:
public boolean add(E e) {
    // every value in the backing HashMap is the same static final object
    return map.put(e, PRESENT)==null;
}
14. TreeSet
(1) Built on TreeMap; a Comparator implementation can be passed in to define the ordering:
public TreeSet(Comparator<? super E> comparator) {
    this(new TreeMap<>(comparator));
}

public TreeSet() {
    this(new TreeMap<E,Object>());
}

private transient NavigableMap<E,Object> m;

// Dummy value to associate with an Object in the backing Map
private static final Object PRESENT = new Object();

TreeSet(NavigableMap<E,Object> m) {
    this.m = m;
}
(2) Example:
import java.util.Comparator;
import java.util.TreeSet;

public class TreeSetTest {
    public static void main(String[] args) {
        TreeSet<Integer> treeSet = new TreeSet<>(new Comparator<Integer>() {
            @Override
            public int compare(Integer o1, Integer o2) {
                if (o1 > o2) {
                    return -1;
                } else if (o1 < o2) {
                    return 1;
                }
                return 0;
            }
        });
        treeSet.add(1);
        treeSet.add(2);
        treeSet.add(3);
        treeSet.add(4);
        treeSet.forEach(a -> {
            System.out.println(a);
        });
    }
}
Output:
4 3 2 1
15. LinkedHashSet
(1) LinkedHashSet extends HashSet and calls a package-private constructor of its parent; the underlying implementation is a LinkedHashMap.
public class LinkedHashSet<E>
    extends HashSet<E>
    implements Set<E>, Cloneable, java.io.Serializable {

    public LinkedHashSet() {
        super(16, .75f, true);
    }
}

HashSet(int initialCapacity, float loadFactor, boolean dummy) {
    map = new LinkedHashMap<>(initialCapacity, loadFactor);
}
16. Collections
(1) Collections.synchronizedList and Collections.synchronizedMap wrap a non-thread-safe collection into a thread-safe one. In essence they are wrapper classes whose methods run the original methods inside synchronized blocks, so performance is comparable to Hashtable and Vector.
private final Map<K,V> m;     // Backing Map
final Object mutex;

SynchronizedMap(Map<K,V> m) {
    this.m = Objects.requireNonNull(m);
    mutex = this;
}

public V put(K key, V value) {
    synchronized (mutex) {return m.put(key, value);}
}
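A usage sketch (class name mine). Note one caveat from the javadoc: individual calls synchronize on the wrapper's mutex, but iteration is a compound operation and must be locked manually:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListDemo {
    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        list.add(1);
        list.add(2);
        // Each add() above locked the wrapper internally, but iterating is
        // several calls, so the caller must hold the lock for the whole loop
        synchronized (list) {
            for (int x : list) {
                System.out.println(x);
            }
        }
    }
}
```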
17. Queue
(1) The Queue interface extends Collection. Unlike List and Set it is not randomly accessed; it follows first-in-first-out.
add: append at the tail; throws an exception if the queue is full
offer: append at the tail; returns false if the queue is full
remove: remove and return the head; throws an exception if the queue is empty
poll: remove and return the head; returns null if the queue is empty
element: return the head element without removing it; throws an exception if the queue is empty
peek: return the head element without removing it; returns null if the queue is empty
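The throwing vs. non-throwing method pairs can be exercised on a bounded queue; a minimal sketch (class name mine):

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class QueueMethodsDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayBlockingQueue<>(1);
        System.out.println(q.offer(1)); // true
        System.out.println(q.offer(2)); // false: full, no exception
        try {
            q.add(2);                   // full: add throws instead
        } catch (IllegalStateException e) {
            System.out.println("add threw IllegalStateException");
        }
        System.out.println(q.peek());   // 1 (does not remove)
        System.out.println(q.poll());   // 1 (removes)
        System.out.println(q.poll());   // null: empty, no exception
    }
}
```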
(2) BlockingQueue adds blocking methods on top of Queue: take blocks while the queue is empty, and put blocks while it is full.
One end of a blocking queue (the head) serves consumers; the other end (the tail) serves producers.
(3) The main BlockingQueue implementations:
SynchronousQueue: capacity 0; a concurrent structure for handing data directly between threads.
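The zero-capacity hand-off can be observed directly: a non-blocking offer fails unless a taker is already waiting, while put blocks until the rendezvous happens. A small sketch (class name mine):

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        // No taker is waiting, so a non-blocking offer fails immediately
        System.out.println(q.offer("x")); // false
        Thread consumer = new Thread(() -> {
            try {
                System.out.println(q.take()); // blocks until a producer hands off
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        q.put("handoff"); // rendezvous with the waiting take()
        consumer.join();
    }
}
```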
ArrayBlockingQueue: bounded; backed by an array, using one ReentrantLock and two Conditions for producer-consumer communication.
final Object[] items;
int takeIndex;
int putIndex;
int count;
final ReentrantLock lock;
private final Condition notEmpty;
private final Condition notFull;
public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // full: wait until a consumer signals notFull
        while (count == items.length)
            notFull.await();
        enqueue(e);
    } finally {
        lock.unlock();
    }
}

private void enqueue(E x) {
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    // the queue is no longer empty: wake a waiting consumer
    notEmpty.signal();
}

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // empty: wait until a producer signals notEmpty
        while (count == 0)
            notEmpty.await();
        return dequeue();
    } finally {
        lock.unlock();
    }
}

private E dequeue() {
    final Object[] items = this.items;
    @SuppressWarnings("unchecked")
    E x = (E) items[takeIndex];
    items[takeIndex] = null;
    if (++takeIndex == items.length)
        takeIndex = 0;
    count--;
    if (itrs != null)
        itrs.elementDequeued();
    // the queue is no longer full: wake a waiting producer
    notFull.signal();
    return x;
}
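The put/take signalling above can be exercised with one producer and one consumer; a minimal sketch (class name mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks on notFull while the queue holds 2
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        for (int i = 0; i < 5; i++) {
            System.out.println(queue.take()); // blocks on notEmpty when empty
        }
        producer.join(); // prints 0..4 in FIFO order
    }
}
```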
LinkedBlockingQueue: an optionally bounded queue backed by a singly linked list. The capacity can be passed to the constructor and defaults to Integer.MAX_VALUE.
Unlike ArrayBlockingQueue it uses two locks, each with its own Condition; separating the put and take locks improves throughput.
public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    last = head = new Node<E>(null);
}

public LinkedBlockingQueue() {
    this(Integer.MAX_VALUE);
}

private transient Node<E> last;
private final ReentrantLock takeLock = new ReentrantLock();
private final Condition notEmpty = takeLock.newCondition();
private final ReentrantLock putLock = new ReentrantLock();
private final Condition notFull = putLock.newCondition();
PriorityBlockingQueue: not first-in-first-out; supports custom priorities.
A Comparator implementation can be passed in to define the order in which elements are taken from the queue.
Example:
package collections;

import java.util.Comparator;
import java.util.concurrent.*;

public class BlockingQueueTest {
    // Extends FutureTask, which wraps a Callable as both Future and Runnable
    static class PriorityTask extends FutureTask<Object> {
        private int priority;

        public PriorityTask(int priority, Callable<Object> callable) {
            super(callable);
            this.priority = priority;
        }
    }

    public static void main(String[] args) {
        // The priority queue must hold Runnable (not PriorityTask), because
        // the thread-pool parameter must be a BlockingQueue<Runnable>.
        // The comparator makes smaller priorities come out first.
        PriorityBlockingQueue<Runnable> queue = new PriorityBlockingQueue<>(10, new Comparator<Runnable>() {
            @Override
            // When tasks reach the pool via execute, o1 and o2 are the
            // PriorityTask objects themselves. Via submit they would be
            // wrapped in a FutureTask, with the real PriorityTask hidden
            // inside the wrapper's callable, and this cast would fail.
            public int compare(Runnable o1, Runnable o2) {
                try {
                    PriorityTask t1 = (PriorityTask) o1;
                    PriorityTask t2 = (PriorityTask) o2;
                    if (t1.priority < t2.priority) {
                        return -1;
                    } else if (t1.priority > t2.priority) {
                        return 1;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return 0;
            }
        });
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 1, 100, TimeUnit.SECONDS,
                queue, Executors.defaultThreadFactory());
        threadPool.execute(new PriorityTask(4, () -> {
            // Sleep to slow the worker down so that tasks pile up in the queue
            Thread.sleep(1000);
            System.out.println("current task priority: " + 4);
            return null;
        }));
        threadPool.execute(new PriorityTask(5, () -> {
            Thread.sleep(1000);
            System.out.println("current task priority: " + 5);
            return null;
        }));
        threadPool.execute(new PriorityTask(2, () -> {
            Thread.sleep(1000);
            System.out.println("current task priority: " + 2);
            return null;
        }));
        threadPool.execute(new PriorityTask(1, () -> {
            Thread.sleep(1000);
            System.out.println("current task priority: " + 1);
            return null;
        }));
        threadPool.execute(new PriorityTask(3, () -> {
            Thread.sleep(1000);
            System.out.println("current task priority: " + 3);
            return null;
        }));
        threadPool.shutdown();
    }
}



