CPL 2016, week 4
Inter-thread communication
Oleg Batrashev
Institute of Computer Science, Tartu, Estonia
March 2, 2016
Overview
Studies so far:
1. Inter-thread visibility: JMM
2. Inter-thread synchronization: locks and monitors
3. Thread management: executors, tasks, cancellation
Today:
- Inter-thread communication
Next week:
- Actor model
Inter-thread communication 75/101 Confinements -
Outline
Inter-thread communication
  Confinements
  Queues and messages
    Producer/consumer pattern
    Pros and cons
  Advanced patterns
    Back pressure
    Non-FIFO queues
  Channels vs streams
    Non-determinism
    Declarative concurrency
Confinement idea
- need to synchronize data accesses from multiple threads
- ensure some other way that the data is accessed from a single thread at a time
  - use language semantics, a programming pattern/protocol, design rules...
  - if we can do that, no locking is needed!
- confine (bound) data to
  1. a single method/function, or a sequence of methods
  2. a single thread
  3. a single object
  4. a single object/thread from a group of objects, dynamically
- i.e. data changes "ownership" over time
Method confinement
Within a single method, only the executing thread can access:
- local variables of primitive type,
- new objects whose references do not escape.

An object reference escapes if:
1. it is passed as an argument to another method/constructor,
2. it is returned from the method invocation,
3. it is stored to a global variable (static or object field accessible from another thread),
4. another object escapes that can be traversed to reach this object.
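The escape rules above can be illustrated with a short sketch (the class, method, and field names here are made up for illustration, not from the lecture code):

```java
import java.util.ArrayList;
import java.util.List;

public class EscapeDemo {
    static final List<int[]> shared = new ArrayList<>();  // globally reachable

    // Confined: the int[] never leaves this method, so no locking is needed.
    static int sumConfined() {
        int[] buf = {1, 2, 3};   // reference stays local
        int s = 0;
        for (int v : buf) s += v;
        return s;                // only the primitive result escapes
    }

    // Escaping: the same array becomes reachable from other threads.
    static int[] leak() {
        int[] buf = {1, 2, 3};
        shared.add(buf);         // rule 3: stored to a global variable
        return buf;              // rule 2: returned from the method
    }

    public static void main(String[] args) {
        System.out.println(sumConfined());  // 6
        System.out.println(leak().length);  // 3
    }
}
```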
Hand-off and other protocols

Alternatives to using the object exclusively inside a method:
- pass "ownership" (read/write) of the object (hand-off)
  - tail call hand-off – ensure the object is never used locally after it escapes

```java
Location loc = new Location(10, 20);
// use loc here
otherMethod(loc);
// do not use loc here after
```

- copying – the caller copies, or the receiver copies the object
  - use scalar (primitive) arguments

```java
otherMethod(loc.lon, loc.lat);
```

- trust – believe that the callee does not publish the object, so it can be used without locking after the callee returns
- a similar problem exists even within a single thread – when passing a Location to a method, can we be sure it is not changed?
Thread confinement

Make sure data is only visible from a single thread:
- a private field in a Thread subclass
  - make sure it is accessed only from the run() method or its private helper subroutines
  - similarly, it can be a private field in a Runnable
  - dedicated objects are also OK if their references do not escape
- ThreadLocal in Java
  - every thread has its own version of the reference stored in the ThreadLocal object
  - often used for sessions and other contexts
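A minimal ThreadLocal sketch (the class and method names are mine): each thread bumps its own copy of a counter, so the writes never race and need no locking.

```java
public class ThreadLocalDemo {
    // Each thread sees its own independent copy of the counter.
    static final ThreadLocal<int[]> counter =
            ThreadLocal.withInitial(() -> new int[]{0});

    // Increment the calling thread's private counter and return it.
    static int bump(int times) {
        for (int i = 0; i < times; i++) counter.get()[0]++;
        return counter.get()[0];
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> bump(5));  // writes its own copy
        t.start();
        t.join();
        System.out.println(bump(3));  // main thread's copy: 3, not 8
    }
}
```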
Thread confinement is an essential part of the Actor model (next lecture) and of CSP (Communicating Sequential Processes; see the Google Go language).
- how can it be useful if data is accessed from a single thread? see queues and messages
Object confinement
An object is safely accessed if it
- is only visible within another object (encapsulation in OOP),
- is only accessed from synchronized methods,
- does not escape.

In other words, synchronization is delegated to the containing object.
- remember, proper usage may still require client-side synchronization
- list data escapes through its iterators
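A small sketch of such delegated synchronization (a hypothetical Tracker class, loosely modeled on the Location example from these slides): the mutable fields never escape, and every access goes through a synchronized method, so callers need no locking of their own.

```java
public class Tracker {
    private double lon, lat;   // confined state; the references never escape

    public synchronized void move(double dLon, double dLat) {
        lon += dLon;
        lat += dLat;
    }

    // Returns a copy of the coordinates, never the internal state itself.
    public synchronized double[] snapshot() {
        return new double[]{lon, lat};
    }

    public static void main(String[] args) {
        Tracker t = new Tracker();
        t.move(10, 20);
        double[] s = t.snapshot();
        System.out.println(s[0] + "," + s[1]);  // 10.0,20.0
    }
}
```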
Confinement within groups

Consider an example:
1. data is confined to an object
   - the object's responsibility is to synchronize access to it
2. take() returns the data but deletes the reference in the object
   - remember the collection example with 2 locks (Refining locks)
3. now the data is confined to the thread
   - only the thread can access it, through its local variable
4. the thread puts the data into some other object and does not use it anymore
5. now the data is confined to another object

Variants and names:
- known as tokens, capabilities, resources
- ring communication in hardware or software to pass the token
Requires careful code design.
Outline
Inter-thread communication
  Confinements
  Queues and messages
    Producer/consumer pattern
    Pros and cons
  Advanced patterns
    Back pressure
    Non-FIFO queues
  Channels vs streams
    Non-determinism
    Declarative concurrency
Total thread confinement
In the ideal situation, data is completely confined to (two) threads if:
- they run on different machines,
- they communicate by sending messages over the network.

In the JVM, threads may almost achieve this if:
- almost all data is local to the threads,
  - except the queues that contain messages and reflect network communication;
- data is cloned (duplicated) if it is sent in a message to another thread.

No synchronization is needed, except for queue access!
- cloning data is inefficient – consider immutable objects.
Immutable messages
1. mutable Location objects passed inside Traveler to Server.push()

```java
public class Location {
    public double lon, lat;
}
```

   - they end up being encoded to the network socket,
   - and are modified in place by the simulation,
   - if done from two different threads, lon and lat may have values from different simulation runs;
   - the location should be copied once Server gets its own thread.

2. immutable Point objects have no such problem and may be passed to different threads without worry

```java
public class Point {
    public final int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }
}
```

Messages are passed from one thread to another – it may be a good idea to make them immutable.
Java Queue interfaces
From the Java docs of the Queue and BlockingQueue interfaces:

            Throws exception   Returns special value   Blocks   Times out
Insert      add(e)             offer(e)                put(e)   offer(e, time, unit)
Remove      remove()           poll()                  take()   poll(time, unit)
Examine     element()          peek()                  -        -
- the first row: 4 methods to insert elements into the queue
- the second row: 4 methods to remove elements from the queue
- the third row: 2 methods to examine the queue without removing elements
- the first 2 columns differ only if the queue is empty or full
- the last 2 columns are only available for BlockingQueue
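The table can be exercised directly; a minimal sketch over a capacity-1 ArrayBlockingQueue (the demo class is mine):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueMethodsDemo {
    public static String run() throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1); // bounded, capacity 1
        StringBuilder out = new StringBuilder();
        q.put("a");                        // blocks if full (not here)
        out.append(q.offer("b"));          // full: returns the special value false
        out.append(',').append(q.peek());  // "a": examine without removing
        out.append(',').append(q.take());  // "a": blocks if empty (not here)
        out.append(',').append(q.poll());  // empty: returns the special value null
        try {
            q.remove();                    // empty: throws an exception
        } catch (NoSuchElementException e) {
            out.append(",empty");
        }
        return out.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());         // false,a,a,null,empty
    }
}
```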
Producer and consumer
The simplest scenario:
- one or several threads put elements into the queue – producers
- one or several threads take elements from the queue and process them – consumers

Queue implementations in Java are:
- ConcurrentLinkedQueue – unbounded; non-blocking
  - non-blocking write – obvious, because the queue is unlimited,
  - non-blocking read – you cannot wait for an element to appear in the queue;
- LinkedBlockingQueue – unbounded; blocking read
- ArrayBlockingQueue – bounded; blocking
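A minimal producer/consumer sketch over a bounded ArrayBlockingQueue (the poison-pill value -1 used to signal end-of-stream is my convention, not from the slides):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static int run(int n) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4); // bounded, blocking
        final int[] sum = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i); // blocks when full
                queue.put(-1);                             // poison pill: no more data
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int v; (v = queue.take()) != -1; )    // blocks when empty
                    sum[0] += v;
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10));  // 55
    }
}
```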
Transducers
One thread may be both a consumer and a producer – a transducer (transformer):
- take an element from one queue
- transform it, e.g.
  - throw away unneeded elements (filter)
  - extract useful information from each element (map)
  - aggregate messages and produce a compound value (reduce)
- put the result into another queue

```java
while (isRunning) {
    e = queue1.take();
    if (!isAcceptable(e)) continue;           // filter
    aggrValue += e.value;                     // reduce
    if (aggrValue > 100) {
        queue2.put(new Message(aggrValue));
        aggrValue = 0;                        // reset the aggregate after sending
    }
}
```
Deadlock free execution

Consider an application with producers/consumers/transducers only:

[Diagram: Producer → queue1 → TransducerA → queue2 → TransducerB → back to queue1]

- messages and their referenced objects are either immutable, copied, or ensured not to be used after being sent out
  - i.e. no synchronization on message contents is needed
- threads need to synchronize only on queue access,
- no thread ever takes two locks at a time.

Hence this system is deadlock free! It may stall only because of a lack of generated messages.
- This is essentially the confinement within groups pattern.
- The actual idea – limit thread execution boundaries.
  - no thread walks from the network layer to the GUI layer!
Problem with queues
Not everything is so easy though...
- Unbounded queues may lead to the exhaustion of (memory) resources. Solutions:
  - stop producers through back pressure: its simplest form is a bounded queue;
  - do not let the queue grow: throw away or aggregate messages.
- Race conditions are still there, and even more of them, for example:
  1. Producer generates data messages and a stop;
  2. TransducerA reads from queue1, puts the result to queue2;
  3. TransducerB reads and puts the transformed message to queue1;
  4. TransducerA processes this transformed message.
- It may happen that the stop message is put by the producer between 1 and 3, in which case the transformed message is put after the stop is processed by TransducerA.
- This could not happen with a single thread "penetrating all layers".
Outline
Inter-thread communication
  Confinements
  Queues and messages
    Producer/consumer pattern
    Pros and cons
  Advanced patterns
    Back pressure
    Non-FIFO queues
  Channels vs streams
    Non-determinism
    Declarative concurrency
Bounded blocking queues
Back pressure – stop or slow down the producer if the consumers cannot keep pace.

The simplest back pressure solution: make a queue that is
- bounded – limits the size of the queue,
- blocking – blocks the thread if the queue is full.

A producer/transducer may be completely stopped by a full queue. It may end up in a deadlock:

[Diagram: Producer → queue1 → TransducerA → queue2 → TransducerB → back to queue1]

1. TransducerA generates too many messages and is blocked on queue2,
2. TransducerB is blocked on queue1 for the same reason.
Advanced back pressure techniques
Some more sophisticated techniques:
- Do not limit the queues but implement smarter transducers:
  - communicate back to the producer when ready to accept new messages,
  - the producer suspends generating new messages until it receives a notification from the consumer,
  - for example, the TCP protocol.
- Deadlock recognition by the thread scheduler, which may enlarge certain queues.
- The scheduler looks at the queues and threads and suspends/resumes threads that threaten the execution (flood the queues).
Synchronous queue
There are synchronous queues, where the producer and consumer block until the message is transferred:
- a blocking queue with capacity 0
- Java SynchronousQueue
- the Go language has built-in synchronous queues called channels

Block the producer until a certain message is transferred:
- Java TransferQueue
  - "A BlockingQueue in which producers may wait for consumers to receive elements."
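A minimal SynchronousQueue rendezvous sketch (the class name is mine): put() blocks until a take() happens in another thread.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static String run() throws InterruptedException {
        SynchronousQueue<String> ch = new SynchronousQueue<>(); // capacity 0
        Thread producer = new Thread(() -> {
            try {
                ch.put("hello");  // blocks until some consumer takes it
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        String msg = ch.take();   // rendezvous: unblocks the producer
        producer.join();
        return msg;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());  // hello
    }
}
```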
Discarding/aggregating queue
Possibilities for a bounded queue that is full:
- discard new messages – bounded non-blocking queue
  - use add(e) or offer(e) from the Java Queue interface
- discard old messages – overwrite old data in the queue
  - an existing Java implementation?
  - for example, a traveler's old coordinates are not relevant.
- discard unimportant messages (see also priority queues)
- aggregate – combine messages based on some criteria.

Remember filter/map/reduce in transducers – the boundary between transducers and queues is blurred:
- the queue borrows the producer thread for a short period to filter/aggregate elements.
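The "discard old messages" variant has no ready-made JDK queue, but it can be sketched over a bounded queue with the non-blocking offer()/poll() pair (the helper name is mine; this simple version is not atomic under concurrent producers):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DiscardingDemo {
    // Keep only the newest elements: on overflow, drop the oldest ones.
    static <T> void offerLatest(BlockingQueue<T> q, T e) {
        while (!q.offer(e))  // offer() returns false instead of blocking
            q.poll();        // discard the oldest message to make room
    }

    public static void main(String[] args) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(2);
        for (int i = 1; i <= 5; i++) offerLatest(q, i);
        System.out.println(q);  // [4, 5]: the old coordinates were dropped
    }
}
```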
Priority queues and delayed queues
Deliver messages in a different order; in Java:
- PriorityQueue – unbounded queue
  - priority specified by the natural ordering of elements or a Comparator object
- PriorityBlockingQueue – also unbounded, but supports blocking read

Deliver messages in a different order by applying a delay:
- in Java, DelayQueue
  - ordering is specified by the delay given at message add(e)
  - take() returns only when the time for some message has expired
Paced execution

Consider these cases:
- we do not want to redraw more often than 24 times a second
- we may take many messages at once and update our state

Paced execution – aggregate updates (messages) and delay before running the code:
- may hugely increase the efficiency of the application
- needs a careful combination of delays and aggregation

Example – update the GUI not more often than once in 50 ms:
- use a DelayQueue; put
  - data update messages with delay 0
  - a GUI update message with delay 50
- upon receive
  - a data update is aggregated into the existing collection
  - the GUI message checks if new data has been received, puts the aggregated data to the next queue and re-schedules itself in another 50 ms
Outline
Inter-thread communication
  Confinements
  Queues and messages
    Producer/consumer pattern
    Pros and cons
  Advanced patterns
    Back pressure
    Non-FIFO queues
  Channels vs streams
    Non-determinism
    Declarative concurrency
Message preserving read

Are multi-threaded programs doomed to have race conditions?
- one thread writes the numbers 1 to 10 into the queue
- another thread reads them, sums them up and prints the result
- are different results possible?
  - two threads read from the queue...
  - two threads iterate the queue...
  - two threads write to the queue + the reading thread multiplies by the value from every second message...

If every thread iterates a queue
- it does not consume the messages and leaves them for other threads to read
  - in some languages this is a stream, as opposed to a channel
- and there are no other sources of non-determinism (next slide)
- then there are no race conditions and the program is completely deterministic.
Sources of non-determinism
- Math.random()
- polling – doing another thing if a message has not yet arrived
- several threads consuming the queue with no order
  - iterating the queue is still OK
- several threads writing to the same queue with no order
- a thread trying to read from 2 queues with no order
  - e.g. reading one here after one there is still OK

Every such source of non-determinism may cause a race condition; otherwise there are none.
Race condition free execution
Consider a map/filter/reduce execution:
- every part (function) runs in its own thread
- threads wait for data to be available – reading streams of data

The final result is deterministic (number crunching usually must be). Other names:
- lockstep execution, declarative concurrency

Problems with such execution:
- bad efficiency in high-latency networks – messages cannot be re-ordered
- completely broken on an exception in a single part
- completely broken on a single network link failure

We need more independent components – e.g. the Actor model (next lecture).
Summary
- confinement hides data in an owner: a method, thread or object
- confinement within groups allows changing the ownership of an object
- immutable objects help in concurrency
- structure your application using consumers/producers and the queues they use for communication
  - avoids thread knots – no deadlocks with unbounded queues
  - usually need to bound the queues to avoid running out of memory
- back pressure techniques as well as non-FIFO queues may help
- race-condition-free concurrency is possible but has limited application