理论基础

举例:1000 个线程同时对一个计数器(cnt)进行++操作,由于 cnt++ 不是原子操作,会导致最终结果小于预期的 1000。

public class ThreadUnsafeExample {
    private int cnt = 0;

    public void add() {
        cnt++;
    }

    public int get() {
        return cnt;
    }
}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Example {
    public static void main(String[] args) throws InterruptedException {
        final int threadSize = 1000;
        ThreadUnsafeExample example = new ThreadUnsafeExample();
        final CountDownLatch countDownLatch = new CountDownLatch(threadSize);
        ExecutorService executorService = Executors.newCachedThreadPool();
        for (int i = 0; i < threadSize; i++) {
            executorService.execute(() -> {
                example.add();
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        System.out.println(example.get());
    }
}

ExecutorService:用于管理线程池。

CountDownLatch:用于等待所有线程执行完成。

线程安全问题,展示了共享变量在并发访问时的数据竞争问题。

Q:Java 是怎么解决并发问题的?
  1. 核心知识点:Java 内存模型规范了 JVM,提供了按需禁用缓存和编译优化的方法,具体体现为:
    1. volatile、synchronized、final 三个关键字
    2. Happens-Before 规则
  2. 并发编程要解决的三大问题:可见性、有序性、原子性

Happens-Before 规则
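一个最常用的 Happens-Before 规则是 volatile 变量规则:对一个 volatile 变量的写,happens-before 于后续对它的读;结合程序顺序规则和传递性,volatile 写之前的普通写也对读线程可见。下面是一个示意(类名、方法名为示例假设):

```java
public class HappensBeforeExample {
    static int data = 0;
    static volatile boolean ready = false;

    // 返回 reader 线程观察到的 data 值
    static int runOnce() throws InterruptedException {
        data = 0;
        ready = false;
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) {
                // 自旋,直到 volatile 读到 true
            }
            seen[0] = data; // volatile 规则 + 传递性:保证读到 42
        });
        Thread writer = new Thread(() -> {
            data = 42;    // 普通写,按程序顺序规则 happens-before 下面的 volatile 写
            ready = true; // volatile 写,happens-before 后续的 volatile 读
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // 42
    }
}
```

如果去掉 ready 上的 volatile,reader 既可能读不到 ready 的更新,也可能读到 ready 为 true 而 data 仍为 0。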

线程基础

线程状态转换:

  • 新建(New):线程被创建但尚未启动的状态。
  • 运行(Runnable):线程已启动,等待 CPU 调度。
  • 运行中(Running):线程获得 CPU 时间片正在执行。
  • 阻塞(Blocking):线程在等待锁以进入同步代码块。
  • 等待(Waiting):线程进入无限期等待状态。
  • 计时等待(Timed Waiting):线程进入有限期等待状态。
  • 死亡(Terminated):线程执行完毕。
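上面的状态可以通过 Thread.getState() 观察(注意 Java 的 Thread.State 枚举中没有单独的 Running,运行中也属于 RUNNABLE)。下面是一个示意(类名为示例假设,BLOCKED 的观察依赖线程调度,不是严格保证):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                // 拿到 lock 后立即退出
            }
        });
        System.out.println(t.getState()); // NEW:已创建,尚未启动
        synchronized (lock) {
            t.start();
            Thread.sleep(100); // 给 t 时间运行到争抢 lock 处
            System.out.println(t.getState()); // 通常为 BLOCKED:等待进入同步块
        }
        t.join();
        System.out.println(t.getState()); // TERMINATED:执行完毕
    }
}
```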

线程使用方式:

  • 实现 Runnable 接口
  • 实现 Callable 接口
  • 继承 Thread
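三种使用方式可以放在一个最小示意里对比(类名为示例假设);其中 Callable 有返回值,需要配合 FutureTask 使用:

```java
import java.util.concurrent.FutureTask;

public class ThreadCreationDemo {
    // 方式 2:实现 Callable 接口(有返回值),用 FutureTask 包装后交给线程执行
    static int runCallable() throws Exception {
        FutureTask<Integer> task = new FutureTask<>(() -> 42);
        new Thread(task).start();
        return task.get(); // 阻塞直到线程执行完毕,取回返回值
    }

    public static void main(String[] args) throws Exception {
        // 方式 1:实现 Runnable 接口(这里用 lambda)
        Thread t1 = new Thread(() -> System.out.println("runnable"));
        // 方式 3:继承 Thread,重写 run()
        Thread t3 = new Thread() {
            @Override
            public void run() {
                System.out.println("thread subclass");
            }
        };
        t1.start();
        t3.start();
        t1.join();
        t3.join();
        System.out.println(runCallable()); // 42
    }
}
```

一般优先选择实现 Runnable/Callable 接口:类可以继续继承其他类,任务与线程解耦,也便于提交给线程池。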

乐观锁 vs 悲观锁

悲观锁(Pessimistic Locking):假设并发冲突一定会发生,因此在操作共享资源时先加锁,确保独占访问。

  • 直接使用同步机制(如 synchronized 关键字或 ReentrantLock)。
public class PessimisticLock {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }
}
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockExample { // 类名不能与 ReentrantLock 重名,否则字段初始化会引用到自身
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}

乐观锁(Optimistic Locking):假设并发冲突很少发生,允许无锁访问共享资源,仅在提交修改时检查是否冲突。

  • CAS(Compare and Swap):通过原子类(如 AtomicInteger)实现。
  • 版本号机制:类似数据库乐观锁,使用版本号或时间戳。
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLock {
    private AtomicInteger value = new AtomicInteger(0);

    public void increment() {
        int oldValue;
        int newValue;
        do {
            oldValue = value.get();
            newValue = oldValue + 1;
        } while (!value.compareAndSet(oldValue, newValue));
    }
}
  • 悲观锁:适合写多读少、业务逻辑复杂、需要严格保证数据一致性的场景。
  • 乐观锁:适合读多写少、追求高吞吐量、能容忍重试或短暂不一致的场景。

自旋锁 vs 适应性自旋锁

自旋锁:当线程获取锁失败时,不会立即阻塞或睡眠,而是继续在 CPU 上循环尝试获取锁。基于 CAS 操作实现,适用于锁持有时间短的场景。

  • 基础自旋锁(SimpleSpinLock):最简单的自旋锁实现,使用 AtomicReference 保存当前持有锁的线程,可能造成 CPU 资源浪费。
import java.util.concurrent.atomic.AtomicReference;

public class SimpleSpinLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread currentThread = Thread.currentThread();
        while (!owner.compareAndSet(null, currentThread)) {
            // spin waits
        }
    }

    public void unlock() {
        Thread currentThread = Thread.currentThread();
        owner.compareAndSet(currentThread, null);
    }
}
  • 带有让出 CPU 的自旋锁(YieldingSpinLock):增加了 Thread.yield() 调用,在自旋一定次数后让出 CPU 时间片,相比基础实现更节省 CPU 资源。
import java.util.concurrent.atomic.AtomicReference;

public class YieldingSpinLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        int spins = 0;
        while (!owner.compareAndSet(null, current)) {
            spins++;
            if (spins > 30) {
                Thread.yield(); // gives up cpu time slices
            }
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);
    }
}

适应性自旋锁(Adaptive Spinning):根据历史自旋情况动态调整策略,包含自旋次数统计和调整机制,适合竞争程度变化的场景。

  • 如果上次自旋成功,会增加自旋次数
  • 如果上次自旋失败,会减少自旋次数
  • 自动适应不同的场景和负载
import java.util.concurrent.atomic.AtomicReference;

public class AdaptiveSpinLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();
    private volatile int spinCount = 0; // record the number of historical spins
    private static final int MAX_SPIN = 50; // maximum number of spins
    private static final int MIN_SPIN = 0; // minimum number of spins

    public void lock() {
        Thread current = Thread.currentThread();
        int localSpinCount = spinCount;
        if (!owner.compareAndSet(null, current)) {
            int spins = 0;
            do {
                spins++;
                if (spins >= localSpinCount) {
                    if (spins > MAX_SPIN) {
                        try {
                            Thread.sleep(1); // spun too long, sleep directly
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    } else {
                        Thread.yield(); // yield the cpu appropriately
                    }
                }
            } while (!owner.compareAndSet(null, current));

            // update historical spin counts
            adjustSpinCount(spins);
        }
    }

    private void adjustSpinCount(int actualSpins) {
        // adjust the strategy for the next spin based on the number of spins this time
        if (actualSpins < spinCount) {
            // fewer spins: competition is light, reduce the preset spin count
            spinCount = Math.max(MIN_SPIN, spinCount - 1);
        } else if (actualSpins > MAX_SPIN) {
            // too many spins: competition is fierce, increase the preset spin count
            spinCount = Math.min(MAX_SPIN, spinCount + 1);
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);
    }
}

带超时和退避的自适应自旋锁(TimeoutAdaptiveSpinLock):增加了超时机制,使用退避算法(backoff)减少竞争,适合需要超时控制的场景。

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class TimeoutAdaptiveSpinLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();
    private volatile int spinCount = 10;
    private static final int MAX_SPIN = 100;
    private static final int BACKOFF_MIN = 1;
    private static final int BACKOFF_MAX = 100;

    public boolean tryLock(long timeout, TimeUnit unit) {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        Thread current = Thread.currentThread();
        int backoff = BACKOFF_MIN; // initial backoff time
        int spins = 0;

        while (System.nanoTime() < deadline) {
            if (owner.compareAndSet(null, current)) {
                if (spins > 0) {
                    // update spin statistics
                    adjustSpinCount(spins);
                }
                return true;
            }
            spins++;
            if (spins > spinCount) {
                // if the preset number of spins is exceeded, use the backoff algorithm
                int currentBackoff = backoff;
                backoff = Math.min(backoff << 1, BACKOFF_MAX);
                try {
                    Thread.sleep(0, currentBackoff);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    private void adjustSpinCount(int actualSpins) {
        // the spin count is updated using an exponential moving average
        spinCount = (spinCount * 7 + actualSpins) / 8;
        if (spinCount > MAX_SPIN) spinCount = MAX_SPIN;
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);
    }
}

可重入自适应自旋锁(ReentrantAdaptiveSpinLock):支持同一线程多次获取锁,维护重入计数,适合递归调用场景。

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantAdaptiveSpinLock {
    private AtomicReference<Thread> owner = new AtomicReference<>();
    private volatile int spinCount = 10;
    private int holdCount = 0;

    public void lock() {
        Thread current = Thread.currentThread();
        // check for reentrancy
        if (current == owner.get()) {
            holdCount++;
            return;
        }
        // not a reentrant acquisition: spin to acquire the lock
        int spins = 0;
        while (!owner.compareAndSet(null, current)) {
            spins++;
            if (spins > spinCount) {
                Thread.yield();
                if (spins > spinCount * 2) {
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }
        holdCount = 1;
        adjustSpinCount(spins);
    }

    private void adjustSpinCount(int actualSpins) {
        // simple adaptive strategy
        if (actualSpins < spinCount) {
            spinCount = Math.max(1, spinCount - 1);
        } else {
            spinCount = Math.min(100, spinCount + 1);
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        if (current == owner.get()) {
            holdCount--;
            if (holdCount == 0) {
                owner.set(null);
            }
        }
    }
}

JVM 中的自旋锁优化:

  • -XX:+UseSpinning:启用自旋锁(JDK 6 时代的参数,JDK 7 起自旋默认开启并自适应,该参数已移除)
  • -XX:PreBlockSpin:线程阻塞前的默认自旋次数(同样仅在旧版本 JDK 中可用)
  • 自适应策略:
    • 分析锁的持有时间
    • 分析等待线程数
    • 根据 CPU 核数调整策略

公平锁 vs 非公平锁

公平锁(Fair Lock):按照请求顺序获取锁,维护等待队列,保证 FIFO 原则。

非公平锁(Unfair Lock):抢占式获取锁,不保证获取顺序,允许插队。

Locks.java:

import java.util.concurrent.locks.ReentrantLock;

public class Locks {
    private final ReentrantLock fairLock = new ReentrantLock(true);
    private final ReentrantLock unfairLock = new ReentrantLock(false);

    private int fairCounter = 0;
    private int unfairCounter = 0;

    public void fairIncrement() {
        fairLock.lock();
        try {
            fairCounter++;
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            fairLock.unlock();
        }
    }

    public void unfairIncrement() {
        unfairLock.lock();
        try {
            unfairCounter++;
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            unfairLock.unlock();
        }
    }

    public int getFairCount() {
        return fairCounter;
    }

    public int getUnfairCount() {
        return unfairCounter;
    }
}

MonitoredLock.java:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class MonitoredLock {
    private final ReentrantLock fairLock = new ReentrantLock(true);
    private final ReentrantLock unfairLock = new ReentrantLock(false);

    private final List<Long> fairWaitTimes = new ArrayList<>();
    private final List<Long> unfairWaitTimes = new ArrayList<>();

    public void fairLockOperation() {
        long startTime = System.nanoTime();
        fairLock.lock();
        try {
            fairWaitTimes.add(System.nanoTime() - startTime);
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            fairLock.unlock();
        }
    }

    public void unfairLockOperation() {
        long startTime = System.nanoTime();
        unfairLock.lock();
        try {
            unfairWaitTimes.add(System.nanoTime() - startTime);
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            unfairLock.unlock();
        }
    }

    public double getAverageFairWaitTime() {
        return fairWaitTimes.stream()
                .mapToLong(Long::longValue)
                .average()
                .orElse(0.0) / 1_000_000;
    }

    public double getAverageUnfairWaitTime() {
        return unfairWaitTimes.stream()
                .mapToLong(Long::longValue)
                .average()
                .orElse(0.0) / 1_000_000;
    }
}

测试:

private static final int THREAD_COUNT = 3;
private static final int ITERATIONS = 5;
  1. 测试基本性能
testBasicPerformance();

private static void testBasicPerformance() throws InterruptedException {
    System.out.println("\n=== BASIC-PERFORMANCE-TEST ===");
    Locks locks = new Locks();
    CountDownLatch fairLatch = new CountDownLatch(THREAD_COUNT);
    CountDownLatch unfairLatch = new CountDownLatch(THREAD_COUNT);
    long fairStart = System.nanoTime();
    for (int i = 0; i < THREAD_COUNT; i++) {
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                locks.fairIncrement();
            }
            fairLatch.countDown();
        }).start();
    }
    fairLatch.await();
    long fairTime = System.nanoTime() - fairStart;
    long unfairStart = System.nanoTime();
    for (int i = 0; i < THREAD_COUNT; i++) {
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                locks.unfairIncrement();
            }
            unfairLatch.countDown();
        }).start();
    }
    unfairLatch.await();
    long unfairTime = System.nanoTime() - unfairStart;
    System.out.printf("fair lock execution time: %dms%n", fairTime / 1_000_000);
    System.out.printf("unfair lock execution time: %dms%n", unfairTime / 1_000_000);
}

=== BASIC-PERFORMANCE-TEST ===
fair lock execution time: 53ms
unfair lock execution time: 23ms
  2. 测试等待时间
private static void testWaitingTime() throws InterruptedException {
    System.out.println("\n=== WAIT-TIME-TEST ===");
    MonitoredLock monitoredLock = new MonitoredLock();
    CountDownLatch latch = new CountDownLatch(THREAD_COUNT * 2);
    for (int i = 0; i < THREAD_COUNT; i++) {
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                monitoredLock.fairLockOperation();
            }
            latch.countDown();
        }).start();
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                monitoredLock.unfairLockOperation();
            }
            latch.countDown();
        }).start();
    }
    latch.await();
    System.out.printf("fair lock average wait time: %.2fms%n", monitoredLock.getAverageFairWaitTime());
    System.out.printf("unfair lock average wait time: %.2fms%n", monitoredLock.getAverageUnfairWaitTime());
}

=== WAIT-TIME-TEST ===
fair lock average wait time: 2.32ms
unfair lock average wait time: 1.28ms
  3. 测试吞吐量
private static void testThroughput() throws InterruptedException {
    System.out.println("\n=== THROUGHPUT-TEST ===");
    Locks locks = new Locks();
    int testDuration = 5000;
    AtomicInteger fairOps = new AtomicInteger(0);
    AtomicInteger unfairOps = new AtomicInteger(0);
    Thread[] fairThreads = new Thread[THREAD_COUNT];
    for (int i = 0; i < THREAD_COUNT; i++) {
        fairThreads[i] = new Thread(() -> {
            while (!Thread.interrupted()) {
                locks.fairIncrement();
                fairOps.incrementAndGet();
            }
        });
        fairThreads[i].start();
    }
    Thread.sleep(testDuration);
    for (Thread t : fairThreads) {
        t.interrupt();
    }
    for (Thread t : fairThreads) {
        t.join();
    }
    int fairThroughput = fairOps.get();
    fairOps.set(0);
    Thread[] unfairThreads = new Thread[THREAD_COUNT];
    for (int i = 0; i < THREAD_COUNT; i++) {
        unfairThreads[i] = new Thread(() -> {
            while (!Thread.interrupted()) {
                locks.unfairIncrement();
                unfairOps.incrementAndGet();
            }
        });
        unfairThreads[i].start();
    }
    Thread.sleep(testDuration);
    for (Thread t : unfairThreads) {
        t.interrupt();
    }
    for (Thread t : unfairThreads) {
        t.join();
    }
    int unfairThroughput = unfairOps.get();
    System.out.printf("fair lock throughput: %d ops/sec%n", fairThroughput / (testDuration / 1000));
    System.out.printf("unfair lock throughput: %d ops/sec%n", unfairThroughput / (testDuration / 1000));
}

=== THROUGHPUT-TEST ===
fair lock throughput: 687 ops/sec
unfair lock throughput: 682 ops/sec
  4. 测试执行顺序
private static void testFairness() throws InterruptedException {
    System.out.println("\n=== FAIRNESS-TEST ===");
    List<Integer> fairOrder = new ArrayList<>();
    List<Integer> unfairOrder = new ArrayList<>();
    ReentrantLock fairLock = new ReentrantLock(true);
    ReentrantLock unfairLock = new ReentrantLock(false);
    CountDownLatch fairLatch = new CountDownLatch(THREAD_COUNT);
    CountDownLatch unfairLatch = new CountDownLatch(THREAD_COUNT);
    for (int i = 0; i < THREAD_COUNT; i++) {
        final int threadId = i;
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                fairLock.lock();
                try {
                    fairOrder.add(threadId);
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    fairLock.unlock();
                }
            }
            fairLatch.countDown();
        }).start();
    }
    for (int i = 0; i < THREAD_COUNT; i++) {
        final int threadId = i;
        new Thread(() -> {
            for (int j = 0; j < ITERATIONS; j++) {
                unfairLock.lock();
                try {
                    unfairOrder.add(threadId);
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    unfairLock.unlock();
                }
            }
            unfairLatch.countDown();
        }).start();
    }
    fairLatch.await();
    unfairLatch.await();
    System.out.println("fair lock execution order: " + fairOrder);
    System.out.println("unfair lock execution order: " + unfairOrder);
}

=== FAIRNESS-TEST ===
fair lock execution order: [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
unfair lock execution order: [0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0, 0]

可重入锁 vs 非可重入锁

独享锁 vs 共享锁

synchronized

synchronized:Java 中用于实现线程同步的关键字,可以确保多个线程在访问共享资源时的线程安全性。

  1. 修饰实例方法:锁对象是当前实例(this),同一时刻,只有一个线程可以执行该实例的 synchronized 方法,其他线程需要等待锁释放。
// 1
public class Test {
    public synchronized void method() {
        // do something
    }
}

// 2
public class Test {
    public void method() {
        synchronized (this) {
            // do something
        }
    }
}
  2. 修饰静态方法:锁对象是当前类的 Class 对象(Test.class),同一时刻,只有一个线程可以执行该类的 synchronized 静态方法。
// 1
public class Test {
    public static synchronized void staticMethod() {
        // do something
    }
}

// 2
public class Test {
    public static void staticMethod() {
        synchronized (Test.class) {
            // do something
        }
    }
}
  3. 修饰对象:锁对象是 lockObject,可以是任意对象。
public class Test {
    private final Object lockObject = new Object(); // 锁对象需要先声明

    public void method() {
        synchronized (lockObject) {
            // do something
        }
    }
}

synchronized 的底层实现依赖于 JVM 的 监视器锁(Monitor) 机制。synchronized 就是通过获取和释放该锁来实现线程同步的。

  • 每个对象都与一个监视器锁关联。
  • 当线程进入 synchronized 代码块时,会尝试获取对象的监视器锁。
  • 如果锁被其他线程持有,当前线程会进入阻塞状态,直到锁被释放。
  • 当线程退出 synchronized 代码块时,会释放监视器锁。

在 JVM 中,每个对象都有一个对象头,对象头中包含 Mark Word,Mark Word 存储了对象的哈希码、GC 分代年龄、锁状态等信息。当使用 synchronized 时,JVM 会通过修改 Mark Word 来记录锁的状态。

JVM 对 synchronized 进行了优化,引入了 锁升级 机制,锁的状态会从低到高逐步升级:

  1. 无锁:初始状态,没有线程竞争锁。
  2. 偏向锁:当只有一个线程访问同步代码时,JVM 会将该线程的 ID 记录在 Mark Word 中,并将锁标记为偏向锁。偏向锁的目的是减少无竞争情况下的同步开销。
  3. 轻量级锁:当有多个线程竞争锁时,偏向锁会升级为轻量级锁。轻量级锁通过 CAS(Compare-And-Swap)操作来尝试获取锁。
  4. 重量级锁:当竞争激烈时,轻量级锁会升级为重量级锁。重量级锁依赖于操作系统的互斥量(Mutex),线程会进入阻塞状态,等待锁释放。

实例 1:修饰普通方法

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        synchronized (this) {
            System.out.println("I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " done");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

I'm thread Thread-0
Thread-0 done
I'm thread Thread-1
Thread-1 done

等价于:

@Override
public synchronized void run() {
    System.out.println("I'm thread " + Thread.currentThread().getName());
    try {
        Thread.sleep(3000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println(Thread.currentThread().getName() + " done");
}

实例 2:修饰静态方法

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        staticMethod();
    }

    private static void staticMethod() {
        synchronized (SynchronizedObjectLock.class) {
            System.out.println("I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " done");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}

I'm thread Thread-0
Thread-0 done
I'm thread Thread-1
Thread-1 done

等价于:

private static synchronized void staticMethod() {
    System.out.println("I'm thread " + Thread.currentThread().getName());
    try {
        Thread.sleep(3000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println(Thread.currentThread().getName() + " done");
}

实例 3:指定锁对象为 Class 对象

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        synchronized (SynchronizedObjectLock.class) {
            System.out.println("I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " done");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

I'm thread Thread-0
Thread-0 done
I'm thread Thread-1
Thread-1 done

实例 4:指定锁对象为自定义锁

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();
    Object block1 = new Object();
    Object block2 = new Object();

    @Override
    public void run() {
        synchronized (block1) {
            System.out.println("Block1 lock, I'm the thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Block1 lock, " + Thread.currentThread().getName() + " done");
        }

        synchronized (block2) {
            System.out.println("Block2 lock, I'm the thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Block2 lock, " + Thread.currentThread().getName() + " done");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

Block1 lock, I'm the thread Thread-0
Block1 lock, Thread-0 done
Block2 lock, I'm the thread Thread-0
Block1 lock, I'm the thread Thread-1
Block1 lock, Thread-1 done
Block2 lock, Thread-0 done
Block2 lock, I'm the thread Thread-1
Block2 lock, Thread-1 done

注意输出中 Block2 lock, I'm the thread Thread-0 与 Block1 lock, I'm the thread Thread-1 的先后交错。

当 Thread-0 执行完第一段同步代码块并释放 block1 后,它进入第二段同步代码块(锁 block2),与此同时 Thread-1 得以进入第一段同步代码块(锁 block1):两把锁互不影响,可以被不同线程同时持有。

volatile

volatile:提供了一种轻量级的同步机制。

  • 可见性:一个线程修改了 volatile 变量的值,其他线程能立即看到。
  • 禁止指令重排序:防止编译器和处理器对代码进行优化重排。

注意,volatile 不保证原子性


内存可见性机制

  • 普通变量:线程将值从主内存拷贝到工作内存,操作完后才会刷新回主内存。
  • volatile 变量:线程写入操作会立即刷新到主内存,读取操作会直接从主内存获取。
public class TestVolatile {
    private boolean flag = false;

    public void demonstrateVisibilityIssue() throws InterruptedException {
        Thread readerThread = new Thread(() -> {
            while (!flag) {
                // busy-wait,普通变量的更新可能永远不可见
            }
            System.out.println("Reader thread sees the updated flag!");
        });
        readerThread.start();
        Thread.sleep(100);
        flag = true;
        System.out.println("Main thread updated flag to true");
        readerThread.join(3000);
        if (readerThread.isAlive()) {
            System.out.println("Reader thread didn't see the flag update!");
            readerThread.interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestVolatile t = new TestVolatile();
        t.demonstrateVisibilityIssue();
    }
}

Main thread updated flag to true
Reader thread didn't see the flag update!
public class TestVolatile {
    private volatile boolean flag = false;

    public void demonstrateVisibilityIssue() throws InterruptedException {
        Thread readerThread = new Thread(() -> {
            while (!flag) {
                // busy-wait,volatile 读保证能看到主线程的更新
            }
            System.out.println("Reader thread sees the updated flag!");
        });
        readerThread.start();
        Thread.sleep(100);
        flag = true;
        System.out.println("Main thread updated flag to true");
        readerThread.join(3000);
        if (readerThread.isAlive()) {
            System.out.println("Reader thread didn't see the flag update!");
            readerThread.interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestVolatile t = new TestVolatile();
        t.demonstrateVisibilityIssue();
    }
}

Reader thread sees the updated flag!
Main thread updated flag to true

内存屏障/内存栅栏(Memory Barrier):volatile 的内存语义是通过内存屏障实现的。

为了控制重排序,JMM 定义了四种内存屏障:

  1. StoreStore屏障:在 store1 指令和 store2 指令之间加上 StoreStore 屏障,强制先执行 store1 指令再执行 store2 指令;store1 指令和 store2 指令不能进行重排序(StoreStore 屏障前面的 store 指令禁止和屏障后面的 store 指令进行重排序)。
  2. StoreLoad屏障:在 store1 指令和 load2 指令之间加上 StoreLoad 屏障,强制先执行 store1 指令再执行 load2 指令;store1 指令和 load2 指令不能进行重排序(StoreLoad 屏障的前面 store 指令禁止和屏障后面的 load 指令进行重排序)。
  3. LoadLoad屏障:在 load1 指令和 load2 指令之间加上 LoadLoad 屏障,强制先执行 load1 指令再执行 load2 指令;load1 指令和 load2 指令不能进行重排序(LoadLoad 屏障的前面 load 指令禁止和屏障后面的 load 指令进行重排序)。
  4. LoadStore屏障:在 load1 指令和 store2 指令之间加上 LoadStore 屏障,强制先执行 load1 指令再执行 store2 指令;load1 指令和 store2 指令不能进行重排序(LoadStore 屏障的前面 load 指令禁止和屏障后面的 store 指令进行重排序)。

对于编译器来说,发现一个最优布置来最小化插入屏障的总数几乎是不可能的,为此,JMM 采取了保守的策略。

  • 在每个 volatile 写操作的前面插入一个 StoreStore 屏障。
  • 在每个 volatile 写操作的后面插入一个 StoreLoad 屏障。
  • 在每个 volatile 读操作的后面插入一个 LoadLoad 屏障。
  • 在每个 volatile 读操作的后面插入一个 LoadStore 屏障。

volatile 写是在前面和后面分别插入内存屏障,而 volatile 读操作是在后面插入两个内存屏障。
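按上面的保守策略,屏障的插入位置可以用一段示意代码标注出来;屏障由 JVM 自动插入,以下注释仅标注概念位置,类名、字段名均为示例假设:

```java
public class BarrierPlacement {
    int a;          // 普通变量
    volatile int v; // volatile 变量

    void write() {
        a = 1;
        // StoreStore 屏障:禁止上面的普通写与下面的 volatile 写重排序
        v = 2;      // volatile 写
        // StoreLoad 屏障:禁止该 volatile 写与后续任意读操作重排序
    }

    int read() {
        int i = v;  // volatile 读
        // LoadLoad 屏障:禁止下面的普通读与上面的 volatile 读重排序
        // LoadStore 屏障:禁止后续的写操作与上面的 volatile 读重排序
        int j = a;
        return i + j;
    }
}
```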

import java.util.concurrent.CountDownLatch;

public class VolatileNotAtomic {
    private volatile int counter = 0;

    public void increment() {
        counter++; // 读-改-写三步操作,volatile 不保证其原子性
    }

    public int getCounter() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        final VolatileNotAtomic example = new VolatileNotAtomic();
        final int threadCount = 10;
        final int incrementsPerThread = 1000;
        final CountDownLatch latch = new CountDownLatch(threadCount);
        for (int i = 0; i < threadCount; i++) {
            new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    example.increment();
                }
                latch.countDown();
            }).start();
        }
        latch.await();
        System.out.println("Expected counter: " + (threadCount * incrementsPerThread));
        System.out.println("Actual counter: " + example.getCounter());
    }
}

Expected counter: 10000
Actual counter: 9941
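上面的示例丢失了部分更新;一种修复方式是改用原子类 AtomicInteger,其 incrementAndGet() 基于 CAS 保证自增的原子性。下面是一个示意(类名为示例假设),最终输出恒为 10000:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    public void increment() {
        counter.incrementAndGet(); // CAS 实现的原子自增,不会丢失更新
    }

    public int get() {
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter example = new AtomicCounter();
        int threadCount = 10;
        int incrementsPerThread = 1000;
        CountDownLatch latch = new CountDownLatch(threadCount);
        for (int i = 0; i < threadCount; i++) {
            new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    example.increment();
                }
                latch.countDown();
            }).start();
        }
        latch.await();
        System.out.println(example.get()); // 10000
    }
}
```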

final
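final 在并发中的主要语义是初始化安全性:只要对象引用没有在构造期间逸出,构造函数中对 final 字段的写入,对任何拿到该对象引用的线程都可见,无需额外同步;普通字段没有这个保证。下面是一个最小示意(类名和取值为示例假设):

```java
public class FinalFieldExample {
    private final int x; // final 字段:构造完成后对所有线程可见
    private int y;       // 普通字段:其他线程可能看到默认值 0

    public FinalFieldExample() {
        x = 3;
        y = 4;
    }

    public int getX() {
        return x;
    }

    public int getY() {
        return y;
    }
}
```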

JUC

JUC-原子类

JUC-锁

JUC-集合

JUC-线程池

JUC-工具类