Java Concurrency Concepts
- Thread actions are implemented in the `run` method of a `Runnable` interface.
- A thread is scheduled to run using the `start` method of the `Thread` class.
- The thread scheduler allocates portions of CPU time (time slices) to execute thread actions.
- The return of the `main` or `run` method terminates the thread.
- Concurrency doesn't mean actual physical parallel execution.
- The order in which threads actually perform their actions is not predictable; it is a stochastic process.
- You can't make two different pieces of logic from different threads share the same time slice on the same CPU core.
- There is some degree of parallelism, but how much depends on the number of CPU cores and on how the scheduler allocates them.
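A minimal sketch of these basics, with illustrative class and thread names: the action lives in `run`, scheduling happens via `start`, and `join` waits for termination.

```java
public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        // The thread action is defined by the run method of a Runnable (here a lambda)
        Runnable action = () -> System.out.println("Running in: " + Thread.currentThread().getName());

        Thread worker = new Thread(action, "worker-1");
        worker.start();   // ask the scheduler to run the thread; do not call run() directly

        worker.join();    // main thread waits until worker's run() returns (thread terminates)
        System.out.println("Worker finished, main thread exits");
    }
}
```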
Thread Life Cycle
- You can go from any state (except `terminated`) to the runnable state, and from the runnable state to any other state except `new`.
- Almost every state transition needs to pass through the `runnable` state.
- Transitions to the running state are not immediate -- the thread scheduler needs to allocate the next available CPU time slot for this thread.
boolean b = thread1.isAlive();
Thread.State phase = thread1.getState();
Because of the parallelism behind the scenes, the state could have changed by the time you read it; in other words, it is not exactly up to date.
- A thread in a `runnable` state may check whether it has received an interrupt signal.
- A thread that has entered a `waiting` or `timed waiting` state must catch `InterruptedException`, which puts it back into the `runnable` state, and then decide what it should do.
- How the thread reacts to an interrupt signal is up to the programmer.
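A sketch of how a thread might react to an interrupt signal; the reaction chosen here (restoring the flag and stopping) is just one possible policy.

```java
Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {   // runnable thread polls the interrupt flag
        try {
            Thread.sleep(1000);                          // timed waiting state
        } catch (InterruptedException e) {
            // Catching the exception puts the thread back to runnable;
            // how to react is up to the programmer -- here we restore the flag and stop.
            Thread.currentThread().interrupt();
            break;
        }
    }
});
worker.start();
worker.interrupt();   // request the worker to stop
```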
Block Thread
A monitor object helps to coordinate the order of execution of threads.
- Any object can be used as a monitor.
- It allows threads to enter blocked or waiting states.
- It enables mutual exclusion of threads and signaling mechanisms.
`synchronized` enforces exclusive access to a block of code. A thread is blocked, waiting for the previous thread to complete execution of the synchronized block, before it can proceed.
- If `synchronized` is used against a static context, then all instances of the same class block each other against the class itself, as the class context is shared among all instances.
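A sketch of using an arbitrary object as a monitor with `synchronized`; the class and field names are illustrative.

```java
public class SharedCounter {
    private final Object lock = new Object();   // any object can serve as a monitor
    private int count = 0;

    public void increment() {
        synchronized (lock) {                   // only one thread at a time can enter this block
            count++;
        }
    }

    public static synchronized void staticWork() {
        // Synchronizing on the static context locks on SharedCounter.class,
        // which is shared by all instances of the class.
    }
}
```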
Make Thread Wait Until Notified
- `wait()` puts a thread into the waiting state against a specific monitor.
- Any number of threads can be waiting against the same monitor.
- `notify()` wakes up one of the waiting threads (chosen arbitrarily); `notifyAll()` wakes up all waiting threads.
- `wait()`/`notify()`/`notifyAll()` must be invoked within `synchronized` blocks against the same monitor.
- When `notify()` is called, one of the threads currently waiting against the given monitor is woken up and, after reacquiring the monitor, returns from `wait()`; an `InterruptedException` is thrown to a waiting thread only when it is interrupted.
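A minimal single-slot producer/consumer sketch built on `wait()`/`notify()`; the class and method names are illustrative.

```java
public class MessageBox {
    private String message;            // shared state guarded by the intrinsic lock of 'this'
    private boolean empty = true;

    public synchronized void put(String m) throws InterruptedException {
        while (!empty) {
            wait();                    // release the monitor and wait until the box is emptied
        }
        message = m;
        empty = false;
        notify();                      // wake up one thread waiting on this monitor
    }

    public synchronized String take() throws InterruptedException {
        while (empty) {
            wait();                    // wait until a message is available
        }
        empty = true;
        notify();
        return message;
    }
}
```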
Priority
A construct that allows you to attempt to influence the size of the time slice given to a thread on a CPU.
A thread with a higher priority presumably gets more CPU time, but this is only a hint to the scheduler.
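A short fragment showing how the hint is set; whether it has any effect depends on the underlying OS scheduler.

```java
Thread worker = new Thread(() -> System.out.println("high-priority work"));
worker.setPriority(Thread.MAX_PRIORITY);   // 10; MIN_PRIORITY is 1, NORM_PRIORITY is 5
worker.start();
```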
Work-Stealing Pool
If one thread has finished its work and has nothing to do, it can "steal" work from other threads' queues.
This mechanism is already used by `ForkJoinPool` in Java and is highly useful when a task can spawn subtasks, which can be proactively picked up by any available thread, reducing thread idle time.
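A sketch using the work-stealing pool exposed through `Executors.newWorkStealingPool()`; the tasks here are arbitrary placeholders.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WorkStealingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Backed by a ForkJoinPool with parallelism equal to the number of available cores
        ExecutorService pool = Executors.newWorkStealingPool();

        for (int i = 0; i < 16; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);   // idle workers steal queued tasks until all are done
    }
}
```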
Locking Problems
Starvation
A thread is perpetually waiting for a resource held by another busy thread.
Livelock
Threads form an indefinite loop, each expecting confirmation of completion from the other, so none of them makes progress.
Deadlock
Two or more threads are blocked forever, waiting for each other.
If your workload is not high enough, if your threads are not competing for CPU time, if there is plenty of CPU time, or if you don't have a high degree of concurrency, you may never experience these issues; they are stochastic. Their occurrence depends on how many threads you are running, how many concurrent actions you try to execute at the same time, how busy your CPU is, and how many CPU cores you have.
These issues are very difficult to trace. You may not get any exceptions or any obvious, tangible error; the program just slows down.
Solutions to this problem:
- Don't try to control the order of execution; let threads do things concurrently, in parallel, as much as they can.
- Engineer your code in the first place so that there are no dependencies between threads (see the sketch below).
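For illustration, a minimal sketch of a deadlock caused by acquiring two locks in opposite order; acquiring them in one fixed order in both threads avoids it. The lock and class names are illustrative.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                       // give the other thread time to grab lockB
                synchronized (lockB) {         // waits forever if thread 2 holds lockB
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) {         // waits forever if thread 1 holds lockA
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```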
Writing Thread-Safe Code
Stack values such as local variables and method arguments are thread-safe.
- Each thread operates with its own stack.
- No other thread can see this portion of memory.
Immutable objects in shared heap memory are thread-safe.
- Heap memory is shared between all threads.
Heap values undergoing modifications may be:
- Inconsistent -- observed by other threads before modification is completed.
- Corrupted -- partially changed (or half written) by another thread writing to memory at the same time.
Consider creating copies of objects that are specific to a given thread and that other threads don't look at, then synchronize them with main memory after modifications.
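A sketch of an immutable value object that can be safely shared on the heap; the `Point` class is hypothetical.

```java
public final class Point {            // final: no subclass can add mutable state
    private final int x;              // final fields, assigned once in the constructor
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);   // modifications produce a new object instead of mutating
    }
}
```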
Ensure Consistent Access to Shared Data
Disable the compiler optimization that caches the shared value locally within a thread.
`volatile` instructs the Java compiler to:
- Not cache the shared variable locally.
- Always read it from main memory.
- Apply to main memory all changes that occurred in a thread before the update of the volatile variable.
When you place the `volatile` keyword on a variable `y`, you force it not to be optimized; you force the JVM to keep it in the heap and to make sure that any stack representations of that value are synchronized with the heap.
As a result the program runs slightly slower, because it has to go from stack to heap every time.
When the value of `y` is changed, it doesn't mean that the other thread will react to that immediately, as that thread may not get a time slot on the CPU immediately.
In other words, you still can't predict how the program will perform, because you can't tell the order in which the threads get CPU time to execute their instructions.
But eventually the other thread will react, because you are forcing all representations of `y` to point to the same value in the heap.
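A common application is a shared stop flag; a minimal sketch with illustrative names.

```java
public class StopFlagDemo {
    private static volatile boolean running = true;   // always read from / written to main memory

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // Do some work. Without volatile, the JIT could cache 'running'
                // locally and the loop might never observe the update.
            }
            System.out.println("worker observed running == false");
        });
        worker.start();

        Thread.sleep(500);
        running = false;   // visible to the worker the next time it reads the flag
        worker.join();
    }
}
```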
Non-Blocking Atomic Actions
An action is atomic if it is guaranteed to be performed by a thread without interruption.
- Atomic actions cannot be interleaved.
- Only actions performed by a CPU in a single cycle are atomic by default.
- Variable assignments are atomic actions, except for `long` and `double`: these are 64-bit values, and on a 32-bit platform assigning them takes more than a single CPU step (the number is split in two, the operation is performed on each half, and the halves are then combined).
- The thread may, halfway through these steps, run out of its allocated time slot on a CPU, in which case it will be interrupted.
- Between the point in time when the action was interrupted and the point in time when the thread is resumed, if another thread looked at that value, it could observe a corrupted, half-written result.
- Other operations such as `+`, `-`, `*`, `/`, `%`, `++`, `--`, etc., are not atomic.
- `java.util.concurrent.atomic` provides classes that implement lock-free, thread-safe programming of atomic behaviors on single variables, for example: `AtomicBoolean`, `AtomicInteger`, `AtomicLong`, `AtomicReference<T>`.
- Atomic variables also behave as if they are `volatile`.
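A sketch contrasting a plain `++` (not atomic) with `AtomicInteger.incrementAndGet()`; the thread count and iteration count are arbitrary.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    private static int unsafeCount = 0;                          // ++ is not atomic: read, add, write
    private static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    unsafeCount++;                               // may lose updates under contention
                    safeCount.incrementAndGet();                 // lock-free atomic increment
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // The atomic counter is always 400000; the plain counter may be lower.
        System.out.println("unsafe: " + unsafeCount + ", atomic: " + safeCount.get());
    }
}
```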
Ensure Exclusive Object Access Using Intrinsic Locks
Use an intrinsic lock to enforce exclusive access to a shared object:
- Order of execution and object consistency are ensured.
- Synchronized logic creates a bottleneck in a multithreaded application.
- Performance and scalability can be significantly degraded.
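A sketch where the object's intrinsic lock, taken via synchronized methods, keeps two related fields consistent so no thread observes a half-updated pair; the `Range` class is hypothetical.

```java
public class Range {
    private int low = 0;
    private int high = 10;

    // Both methods lock on 'this', so only one thread at a time can see or update the pair.
    public synchronized void set(int low, int high) {
        if (low > high) {
            throw new IllegalArgumentException("low > high");
        }
        this.low = low;
        this.high = high;
    }

    public synchronized boolean contains(int value) {
        return value >= low && value <= high;
    }
}
```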
Intrinsic Locks Automation
Some Java APIs provide synchronized versions of objects:
- `Collections` provides synchronized wrappers for `Collection`, `Set`, `List`, and `Map` objects.
- Operations such as add and remove are already synchronized to ensure consistent access to the collection content.
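A sketch of the synchronized wrappers; note that iteration still needs to be synchronized manually on the wrapper.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedWrapperDemo {
    public static void main(String[] args) {
        List<String> names = Collections.synchronizedList(new ArrayList<>());

        names.add("a");        // add/remove are synchronized internally by the wrapper
        names.add("b");

        synchronized (names) { // iteration must be synchronized manually on the wrapper
            for (String n : names) {
                System.out.println(n);
            }
        }
    }
}
```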
Non-Blocking Concurrency Automation
Classes such as `CopyOnWriteArrayList` or `CopyOnWriteArraySet` provide thread-safe variants of `List` and `Set` to manage concurrency.
- All mutative operations make a fresh copy of the underlying collection; the updated copy then atomically replaces the previous version.
- Readers are never blocked: iterators traverse the read-only snapshot that existed when they were created and are not affected by later modifications.
- These classes are best for small collections where read-only operations vastly outnumber mutative operations and interference among threads during traversal must be prevented.
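A sketch with `CopyOnWriteArrayList`; the iterator works on the snapshot taken when it was created, so concurrent additions do not disturb traversal. The element names are illustrative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        List<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("listener-1");
        listeners.add("listener-2");

        for (String l : listeners) {          // iterates the snapshot taken at this point
            System.out.println("notify " + l);
            listeners.add("late-" + l);       // each write copies the array; no ConcurrentModificationException
        }

        System.out.println(listeners);        // [listener-1, listener-2, late-listener-1, late-listener-2]
    }
}
```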
Alternative Locking Mechanisms
The locking API provides more flexible, programmatic concurrency control mechanisms.
- A write lock prevents other threads from concurrently modifying the object.
- A read lock can be acquired if the write lock is not held by another thread, allowing concurrent read actions.
- Avoid excessive locking.
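A sketch using `ReentrantReadWriteLock` from `java.util.concurrent.locks`; the class and field names are illustrative.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        lock.readLock().lock();        // many readers may hold the read lock concurrently
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        lock.writeLock().lock();       // exclusive: blocks readers and other writers
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```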