Concurrent Lock-Free and Wait-Free Algorithms

Medha Vasanth
School of Computer Science, Carleton University
Ottawa, Canada K1S 5B6
[email protected]

December 10, 2012

Abstract

The widespread use of multicore and cluster configurations makes it necessary to achieve parallelism efficiently in all aspects of a system. It also calls for efficient parallel data structures capable of performing operations concurrently on shared memory. This project implements the fast path slow path approach for a concurrent FIFO queue. The queue is implemented as a singly linked list supporting enqueue and dequeue operations. The approach combines lock-free and wait-free algorithms to achieve both scalability and starvation freedom, and is a suitable choice for systems that require strict progress guarantees for every process.

1 Introduction

Significant enhancements in hardware have led to the development of multicore systems capable of performing hundreds of operations simultaneously. Parallel data structures form an integral part of every system, since they are the basic building blocks of all programs that access or store data. Hence there is a pressing need to devise parallel data structures that make efficient use of the available processing cores while providing progress guarantees. FIFO queues are a natural choice of data structure because of their simplicity: data elements are always inserted at the end of the queue and deleted from the beginning. This project implements a parallel FIFO queue as a singly linked list in which each node points to the next node on the list.

The traditional approaches to coordinating concurrent access are blocking and non-blocking methods. Blocking methods apply a lock to the part of the data structure being modified and release the lock when the operation is complete; the locks are implemented as semaphores. This approach prevents all other processes from modifying the data structure, thereby making it unsuitable for practical use. Non-blocking methods provide progress guarantees without preventing the execution of concurrent processes. There are two kinds of non-blocking behavior: lock-freedom and wait-freedom. If two or more processes attempt to perform an operation on a shared data structure, lock-freedom ensures that at least one process completes its operation in a finite number of steps. While this ensures that the program as a whole makes progress, it can lead to a situation in which all except one process fail to complete their operations. Wait-freedom, on the other hand, ensures that every competing process completes its operation in a finite number of steps. Wait-freedom therefore provides starvation freedom: no process remains blocked waiting for another process to complete its operation. Wait-freedom is usually achieved by employing a helping mechanism in which faster processes help slower processes complete their operations, but the helping mechanism incurs implementation overhead and is generally considered inefficient.

Linearizability is one of the most important properties of non-blocking algorithms. Linearizability means that, to an external observer watching the operations on the shared data structure, the concurrent operations appear to occur in the same order as a sequential execution of those operations. In other words, the parallel execution preserves the order of an equivalent sequential execution. This property ensures the correctness of the algorithm.

Wait-freedom is required in systems that operate under real-time constraints or service level agreements such as hard and soft deadlines. It is also necessary in heterogeneous environments where some processors, such as CPUs and GPUs, may always outperform others. This project implements the fast path slow path approach, which provides a scalable and efficient wait-free version of a FIFO queue. The basic idea is to execute on the fast path and switch to the slow path only when the contention in the system is high. This approach makes the wait-free implementation as scalable and efficient as the lock-free implementation.

The rest of the paper is organized as follows. Section 2 reviews relevant literature, Section 3 presents the building blocks of the fast path slow path approach, Section 4 describes the fast path slow path methodology, Section 5 presents some of the results obtained and Section 6 concludes the paper.

2 Literature Review

The traditional way to achieve parallelism among processes is the mutual exclusion technique. In this technique, a process has exclusive access to the shared object in the critical section; before and after the critical section, the process executes the entry and exit sections. One of the earliest mutual exclusion algorithms was given by Lamport [8]. In his paper, Lamport assumes that there is relatively low contention among processes and that processes can easily gain access to the critical section. Another assumption is that the code outside the critical section does not modify the shared variable (shared data structure). He presents an algorithm that uses exactly seven memory accesses to the shared memory object in the absence of contention. While he provides a proof of correctness and deadlock freedom, the algorithm can lead to a situation in which all but one process starve; hence it does not provide starvation freedom. The starvation problem of [8] was addressed by Yang and Anderson [14]. They propose a mutual exclusion algorithm using atomic reads and writes with O(log N) time complexity. This algorithm is scalable under heavy contention. Processes begin execution at the leaves of a binary arbitration tree and, after executing their critical sections, traverse the tree in reverse order to execute the exit sections. They also propose a fast path algorithm that has a time complexity of O(1) in the absence of contention; however, its time complexity increases to O(N) under high contention. In [1], Anderson and Kim combine [8] with the results of [14] to achieve a time complexity of O(log N) for high contention systems. This algorithm scales under high contention and provides the starvation free property.


A universal construction is one that accepts a sequential implementation of any object and automatically converts it into a parallel implementation. Herlihy [4] shows that atomic read/write primitives cannot be used to construct non-blocking concurrent implementations of even simple structures such as stacks, queues or sets. He shows that a universal object can be constructed for a system of n processes only if its consensus number is greater than or equal to n. The following are some implementations of universal constructions. Herlihy [5] proposes a universal construction using the Fetch&Add primitive (see the footnotes below for definitions of the primitives). The basic idea of his approach is twofold: every operation is implemented sequentially without synchronization, and special memory management strategies are then used to convert the sequential implementation into a lock-free parallel implementation. However, this approach is applicable only to small objects, since the algorithm involves copying the shared object to make modifications. Chuong et al. [2] propose a transaction-friendly universal construction; an implementation is said to be transaction friendly if it allows a process to exit from an uncompleted operation. A process executes the Perform procedure when it wishes to perform an operation and receives cooperative helping from other processes through the Help procedure. Compare&Swap primitives are used to implement this construction. Fatourou and Kallimanis [3] propose a wait-free universal construction using the Fetch&Add and load-linked (LL)/store-conditional (SC) primitives. They experimentally show that their algorithm (Sim) outperforms existing algorithms, although its theoretical complexity is poor. They provide practical wait-free implementations of the stack and queue structures. The limitation of this approach is that it involves copying the shared object into local memory and is not transaction friendly; it is also not applicable to large data structures such as search trees.

In [11], Prakash et al. propose a non-blocking FIFO queue algorithm. They introduce the concept of cooperative helping, where each operation (enqueue or dequeue) is broken into two stages. Each process performing an operation maintains a snapshot of the queue and uses this snapshot to determine whether the operation can be performed; the enqueue and dequeue operations can only be performed at certain correct stages. The reader is referred to [11] for the correct stages at which the operations can be performed. The disadvantage of this algorithm is that every process needs to take a snapshot of the queue, thereby requiring more memory. This problem was addressed by Valois [13]. In his algorithm, the enqueue operation is performed in two stages: updating the last node to point to the new node and then fixing the tail pointer. This eliminates the need for each process to maintain a snapshot. There are cases in which the tail pointer lags behind the head, which can cause the dequeue operation to fail. The head points to a dummy node at the beginning of the list; this ensures that the head always points to a node on the list and avoids dangling references.

Footnotes:
1. The Fetch&Add primitive accepts a memory location and atomically increments its value.
2. The Compare&Swap primitive accepts three arguments: a memory location, an expected value and a new value. If the value at the memory location equals the expected value, it is changed to the new value and the operation is said to be successful.
3. The LL primitive returns the value in a memory location. A subsequent SC primitive changes the value in the memory location only if it has not been updated since the LL.
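As a minimal illustration of these primitives, the following Java sketch shows Fetch&Add and Compare&Swap using the standard java.util.concurrent.atomic package. Java does not expose LL/SC directly; it is usually emulated with CAS. This example is illustrative only and is not taken from any of the cited papers.

import java.util.concurrent.atomic.AtomicInteger;

public class PrimitivesDemo {
    public static void main(String[] args) {
        AtomicInteger cell = new AtomicInteger(5);

        // Fetch&Add: atomically add to the cell and return the previous value.
        int old = cell.getAndAdd(1);              // old == 5, cell now holds 6

        // Compare&Swap: succeeds only if the cell still holds the expected value.
        boolean ok = cell.compareAndSet(6, 10);   // true, cell now holds 10
        boolean fail = cell.compareAndSet(6, 11); // false, cell unchanged

        System.out.println(old + " " + cell.get() + " " + ok + " " + fail);
    }
}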


M. M. Michael and M. L. Scott [9] extend the idea in [13] and ensure that the tail never lags behind the head. Theirs is one of the most widely accepted algorithms in the literature and one of the building blocks of the fast path slow path methodology. They prove that their algorithm is linearizable and non-blocking; the CAS (Compare and Swap) primitive is used to construct the non-blocking implementation. Another implementation inspired by this approach is the one proposed by Kogan and Petrank [6]. Their paper presents the first complete implementation of a wait-free FIFO queue with multiple enqueuers and dequeuers, using a cooperative helping mechanism in which one process helps another process complete its operation. Although the algorithm is slower than lock-free implementations, its performance depends heavily on the operating system configuration and can be improved. They extend the idea of cooperative helping to the fast-path slow-path algorithm [7], in which a process helps another process only if that process gets in the way of the execution of its own operation. This approach follows the fast-path (lock-free) implementation when there is no contention and reverts to the slow-path (wait-free) implementation when it detects contention. They prove that their approach is wait-free and linearizable. They have also extended the same idea to implement wait-free linked lists [12].

Another approach to implementing concurrent objects is to use multiword CAS (MWCAS) primitives instead of single-word CAS. An MWCAS is similar to a single-word CAS: it accepts the number of words, a list of addresses, and lists of old and new values as arguments. The operation succeeds only if all the old values are equal to the values at the corresponding addresses. Moir [10] proposes a conditionally wait-free algorithm using MWCAS primitives; this approach follows the wait-free path only when necessary. The complexity of handling the MWCAS operations efficiently is high, and hence this approach is not widely used.

3 Building Blocks

The fast path slow path methodology involves executing on the fast path most of the time and switching to the slow path only when the contention in the system is high. The methodology borrows heavily from the MS Queue and the KP Queue; both algorithms use the Compare and Swap primitive for all atomic operations and are presented below.

3.1 Michael and Scott Queue (MS Queue)

This is one of the most scalable algorithms presented in the literature. There have been many optimizations of this algorithm to improve its non-blocking capabilities, but the fast path slow path methodology uses the base algorithm for simplicity. The algorithm models the queue as a singly linked list with two pointers. The head pointer always points to the first node, which is a dummy node, and the tail pointer points to the last or the second last node on the list. The following sub-sections describe the enqueue and dequeue operations.

3.1.1 Enqueue Operation

The enqueue operation begins with a process allocating a new node and assigning it a value. The process then checks whether the tail is pointing to the last node on the list (the last node always has its next reference set to null). If so, the process swings the next reference of the last node to point to the newly allocated node and then swings the tail to point to the new last node. Figure 1 illustrates this operation.

If the tail pointer is pointing to the second last node (this can happen when another process's enqueue operation is already in progress; in that case a process attempting an enqueue sees the tail pointing to the second last node on the list), the process first swings the tail to point to the last node and then completes the enqueue operation as described above. Figure 2 illustrates this operation. A sketch of the enqueue operation is given below.
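The following is a minimal Java sketch of the MS Queue enqueue described above, using AtomicReference for the head, tail and next pointers. It follows the structure of [9] but is a simplified illustration rather than the authors' exact code.

import java.util.concurrent.atomic.AtomicReference;

class MSQueue<T> {
    static class Node<T> {
        final T value;
        final AtomicReference<Node<T>> next = new AtomicReference<>(null);
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head;
    private final AtomicReference<Node<T>> tail;

    MSQueue() {
        Node<T> dummy = new Node<>(null);           // head always points to a dummy node
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    void enqueue(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> last = tail.get();
            Node<T> next = last.next.get();
            if (last == tail.get()) {               // tail has not moved since it was read
                if (next == null) {
                    // tail points to the last node: link the new node after it ...
                    if (last.next.compareAndSet(null, node)) {
                        // ... then swing the tail; a failed CAS means another thread fixed it
                        tail.compareAndSet(last, node);
                        return;
                    }
                } else {
                    // tail points to the second last node: help swing it forward first
                    tail.compareAndSet(last, next);
                }
            }
        }
    }
}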

Figure 1: Tail pointing to the last node

Figure 2: Tail pointing to the second last node

3.1.2 Dequeue Operation

The dequeue operation is straightforward. The value to be dequeued is returned to the caller and the head pointer is swung to point to the next node on the list; the current node is de-allocated. This is illustrated in Figure 3. A special case arises when the head and tail pointers point to the same node on the list but the node's next reference is not null; in this case, the tail is said to lag behind. The tail is swung to point to the last node and the dequeue operation then proceeds normally. If the head and tail point to the same node and its next reference is null, the dequeue operation fails, indicating that the queue is empty. Unlike a dequeue operation, an enqueue operation always succeeds.

Figure 3: MS Queue dequeue operation

This algorithm is proven to be non-blocking and scalable. The reader is referred to [9] for the complete algorithm pseudocode and proof of correctness. A sketch of the dequeue operation, continuing the class sketched above, is given below.
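The following minimal sketch adds the corresponding dequeue as a method of the MSQueue class from the previous listing; as before, it follows the structure of [9] rather than reproducing the authors' exact code. The returned value lives in the node that becomes the new dummy.

    T dequeue() {
        while (true) {
            Node<T> first = head.get();             // current dummy node
            Node<T> last = tail.get();
            Node<T> next = first.next.get();
            if (first == head.get()) {              // head has not moved since it was read
                if (first == last) {
                    if (next == null) {
                        return null;                // head == tail and next is null: queue empty
                    }
                    tail.compareAndSet(last, next); // tail lags behind: help swing it forward
                } else {
                    T value = next.value;           // value stored in the node after the dummy
                    if (head.compareAndSet(first, next)) {
                        return value;               // old dummy becomes unreachable
                    }
                }
            }
        }
    }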

3.2 Kogan and Petrank Queue (KP Queue)

The KP Queue provides the first complete wait-free implementation in which multiple processes can perform enqueue and dequeue operations concurrently. The algorithm extends the idea of the MS Queue with a helping mechanism in which faster processes help slower processes complete their operations. This involves the addition of two fields to each node, enqTid and deqTid, which track the process that is currently performing an enqueue or a dequeue operation on that node.

In this algorithm, a process that begins an operation chooses a phase number greater than the phase numbers chosen by all other currently executing threads (the terms process and thread are used interchangeably) and records it in a special state array. The state array also contains a pending field that indicates whether the operation is still pending; this field is constantly updated to reflect the state of the operation. The state array is then traversed to find all threads that have a phase number smaller than the one chosen by the current thread. The current thread helps all of these slower threads complete their operations and then tries to apply its own operation. The cooperative helping mechanism is achieved by breaking each enqueue and dequeue operation into three atomic steps, so that the steps can be performed by different processes:

• All concurrent threads performing the same operation on the linked list realize that an operation is in progress. This step ensures linearization.

• The state array entry of the thread that invoked the operation is updated to indicate that the operation was linearized.

• The references are changed according to the operation in progress and the data structure is updated to reflect its new state.

A sketch of the per-node and per-thread records is given below.
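The following is a minimal Java sketch of the extra bookkeeping described above: a node type carrying the enqTid and deqTid fields and a per-thread state record. The field names follow the description in the text; the exact layout used in [6] differs in some details.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

class KPNode<T> {
    final T value;
    final AtomicReference<KPNode<T>> next = new AtomicReference<>(null);
    int enqTid = -1;                                    // id of the enqueuing thread (-1: none / fast path)
    final AtomicInteger deqTid = new AtomicInteger(-1); // id of the thread dequeuing this node (-1: none)
    KPNode(T value) { this.value = value; }
}

class OpDesc<T> {
    final long phase;        // phase number chosen when the operation started
    final boolean pending;   // true while the operation has not yet taken effect
    final boolean enqueue;   // true for an enqueue, false for a dequeue
    final KPNode<T> node;    // node being inserted, or node identified for removal

    OpDesc(long phase, boolean pending, boolean enqueue, KPNode<T> node) {
        this.phase = phase;
        this.pending = pending;
        this.enqueue = enqueue;
        this.node = node;
    }
}

// The state array holds one OpDesc per thread, for example:
//   AtomicReferenceArray<OpDesc<Integer>> state = new AtomicReferenceArray<>(numThreads);
// A thread helps every entry whose phase number is smaller than its own and whose
// pending flag is still true before applying its own operation.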

3.2.1 Enqueue Operation

The enqueue operation begins with a process allocating a node and assigning it a value. It then chooses a phase number greater than the ones chosen by currently running threads. The thread records its thread id in the enqTid field of the node and sets the pending flag in its state array entry, indicating that an enqueue operation is currently in progress. This defines the first linearization point for the enqueue operation (illustrated in Figure 4). At this stage, other threads realize that an enqueue operation is in progress and cannot perform their own enqueue operations before helping the currently executing thread. The enqueue is then performed in a manner similar to the MS Queue: the thread checks whether the tail is currently pointing to the last node and, if so, updates the last node's next reference to point to the newly allocated node. This is illustrated in Figure 5. It then updates the pending flag to false, indicating that the node has been appended (refer to Figure 6); this defines the second linearization point for the enqueue operation. Finally, the thread swings the tail pointer to point to the newly allocated node and fixes the data structure to reflect the changes. Figure 7 shows the state of the queue after the new node has been inserted. A simplified sketch of these steps is given below.
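The following is a highly simplified Java sketch of the announce/link/finish steps just described, written as methods of a hypothetical queue class that holds head and tail references (AtomicReference<KPNode<T>>) and the state array (AtomicReferenceArray<OpDesc<T>>) from the previous sketch. It omits several of the checks and the fast-path interaction of the full algorithm in [6]; it is meant only to illustrate how the three steps can be carried out by different threads.

long maxPhase() {                                          // largest phase currently in the state array
    long max = -1;
    for (int i = 0; i < state.length(); i++) {
        max = Math.max(max, state.get(i).phase);
    }
    return max;
}

void enqueue(int tid, T value) {
    long phase = maxPhase() + 1;                           // greater than all current phase numbers
    KPNode<T> node = new KPNode<>(value);
    node.enqTid = tid;                                     // record the enqueuer's id in the node
    state.set(tid, new OpDesc<>(phase, true, true, node)); // step 1: announce a pending enqueue
    for (int i = 0; i < state.length(); i++) {             // help all pending enqueues up to my phase
        OpDesc<T> op = state.get(i);
        if (op.pending && op.enqueue && op.phase <= phase) {
            helpEnqueue(i);
        }
    }
}

void helpEnqueue(int tid) {
    while (state.get(tid).pending) {
        KPNode<T> last = tail.get();
        KPNode<T> next = last.next.get();
        if (last == tail.get()) {
            if (next == null) {
                // re-check that the operation is still pending before linking, so the same
                // node is not linked twice after another helper has already finished it
                if (state.get(tid).pending
                        && last.next.compareAndSet(null, state.get(tid).node)) {
                    helpFinish();                          // step 2 succeeded: finish the operation
                }
            } else {
                helpFinish();                              // a linked node is not finished yet: finish it first
            }
        }
    }
}

// Step 3: clear the pending flag of the thread whose node has just been linked and
// swing the tail forward. Any thread may perform this step on another thread's behalf.
void helpFinish() {
    KPNode<T> last = tail.get();
    KPNode<T> next = last.next.get();
    if (next != null && last == tail.get()) {
        int tid = next.enqTid;                             // id of the enqueuer of the linked node
        if (tid != -1) {
            OpDesc<T> op = state.get(tid);
            if (op.pending && op.node == next) {
                // mark the operation as linearized; CAS avoids overwriting a newer announcement
                state.compareAndSet(tid, op, new OpDesc<>(op.phase, false, true, next));
            }
        }
        tail.compareAndSet(last, next);                    // swing the tail past the linked node
    }
}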

3.2.2 Dequeue Operation

Like the enqueue, the dequeue operation begins with the process choosing a phase number greater than the ones chosen by all currently executing threads. The next field of the thread's state array entry is updated to refer to the first node on the list; this is illustrated in Figure 8. The deqTid field of that node is then updated to the id of the thread that is currently performing the dequeue operation. This defines the first linearization point for the dequeue operation. At this stage, all other contending threads realize that a dequeue operation is in progress; they can only help the current operation and cannot perform their own dequeue operations.



Figure 4: Create node and enter details into state array

Figure 5: Make the last node point to this node

Figure 6: Set pending flag to false

Figure 7: Swing tail to point to new node

The pending field of the state array entry for the current thread is set, indicating that the dequeue operation is in progress (refer to Figure 9); this defines the second linearization point for the dequeue operation. At the next step, the pending flag is turned off, indicating that the node to be dequeued has been identified (illustrated in Figure 10). Finally, the head pointer is swung to the next node on the list and the value is returned. The dequeued node is freed and all references are fixed to reflect the new state of the queue. Figure 11 illustrates these actions and the state of the queue after the dequeue operation is complete.

Figure 8: Enter details into the state array

Figure 9: Set deqTid to the thread id

Special care is taken to ensure that each operation is executed only once, and constant updates to the state array ensure that the queue does not enter a transient state in which all processes are blocked. The reader is referred to [6] for detailed code, the exact linearization points and the correctness proof of the algorithm.


Figure 10: Set pending flag to false

Figure 11: Swing head to next node and deallocate the first node

4 Fast path Slow path Methodology

The approach described in the KP Queue suffers from two major performance drawbacks. First, in most cases the currently executing thread could complete its operation without help if it were given enough time. Second, all threads help the currently executing thread perform the same operation. These drawbacks are addressed by the fast path slow path methodology, in which helping is delayed and different threads help different operations complete. The fast path follows the semantics of the MS Queue (the lock-free implementation) and the slow path follows the semantics of the KP Queue (the wait-free implementation). Figure 12 below describes the fast path slow path approach.

Figure 12: The Fast path Slow path methodology (figure taken from [7])

In this approach, every thread tries to apply its operation on the fast path for a fixed number of trials, defined by a parameter MAX FAILURES. If it succeeds in applying its operation on the fast path, it returns to the caller; otherwise it applies its operation on the slow path until it succeeds.



Along with the state array, each thread also maintains a helping record, in which it stores the id of a currently executing thread and its phase number. Delayed helping is achieved by a parameter called HELPING DELAY, which defines the number of times a thread tries to execute its own operation without helping other concurrently running threads. This counter is decremented on every unsuccessful attempt by the thread, and when it reaches zero the thread begins helping the other thread. Delayed helping is based on the opportunistic idea that the currently executing thread will usually complete its operation without help from its peers. As in the KP Queue, every node has an enqTid field, which is used to synchronize between threads executing on the fast and slow paths. A sketch of the overall control flow is given below.
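The following is a minimal Java sketch of the per-operation control flow just described. The names helpIfNeeded, enqueueFastPath and enqueueSlowPath are illustrative placeholders rather than the exact routines of [7]; the constants mirror the MAX FAILURES and HELPING DELAY parameters from the text.

static final int MAX_FAILURES = 10;   // fast-path trials before switching to the slow path
static final int HELPING_DELAY = 3;   // own attempts tolerated before helping a stuck peer

void enqueue(int tid, T value) {
    // Delayed helping: only after this thread has seen the same peer operation still
    // pending for HELPING_DELAY of its own operations does it help that peer.
    helpIfNeeded(tid);

    // Fast path: lock-free, MS Queue style attempts, bounded by MAX_FAILURES.
    for (int trials = 0; trials < MAX_FAILURES; trials++) {
        if (enqueueFastPath(value)) {
            return;                   // succeeded on the fast path, return to the caller
        }
    }

    // Slow path: wait-free, KP Queue style; the operation is announced in the state
    // array and relies on helping, so it always completes.
    enqueueSlowPath(tid, value);
}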

4.1 Fast path

As evident from Figure 12, every thread first scans its helping record and the state array to check whether any other currently executing thread requires help. If not, it tries to apply its operation on the fast path. For the enqueue operation, every thread initializes a counter and increments it on every attempt on the fast path. The enqueue operation is very similar to the one described for the MS Queue [9]; the only difference is the way in which the tail pointer is updated. In this approach, the enqTid field is set to -1 for nodes enqueued on the fast path. If the enqTid field is -1, the tail is fixed as in the MS Queue; otherwise the node was enqueued on the slow path and the tail is fixed as explained in Section 4.2. The dequeue operation functions in a similar manner: it begins by checking whether any other thread needs help completing its operation. If not, the head reference is updated to point to the next node on the list. This happens in a special way to provide synchronization between the fast and slow paths: the stamp associated with the head pointer is used (the algorithm is implemented in Java, which provides types such as AtomicStampedReference). If the stamp has the value -1, the node was dequeued on the fast path; otherwise it was dequeued on the slow path, as explained in Section 4.2. A sketch of the stamped head reference is given below.
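The following is a minimal Java sketch of how a stamped head reference can distinguish fast-path and slow-path dequeues, assuming the convention described above: a stamp of -1 means the node was dequeued on the fast path, while any other stamp holds the id of the slow-path dequeuer. It reuses the KPNode type from the earlier sketch; the surrounding queue logic is omitted and the method names are illustrative.

import java.util.concurrent.atomic.AtomicStampedReference;

class StampedHeadSketch<T> {
    // The head holds the current dummy node together with an integer stamp.
    final AtomicStampedReference<KPNode<T>> head;

    StampedHeadSketch(KPNode<T> dummy) {
        head = new AtomicStampedReference<>(dummy, -1);
    }

    // Fast-path dequeue step: swing the head to the next node and leave the stamp at -1.
    boolean swingHeadFastPath(KPNode<T> first, KPNode<T> next) {
        int[] stampHolder = new int[1];
        KPNode<T> current = head.get(stampHolder);
        return current == first
                && head.compareAndSet(first, next, stampHolder[0], -1);
    }

    // Slow-path dequeue step: record the dequeuing thread's id in the stamp so that
    // other threads can find its state array entry and turn its pending flag off.
    boolean swingHeadSlowPath(KPNode<T> first, KPNode<T> next, int tid) {
        int[] stampHolder = new int[1];
        KPNode<T> current = head.get(stampHolder);
        return current == first
                && head.compareAndSet(first, next, stampHolder[0], tid);
    }
}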

4.2 Slow path

Every thread executing its enqueue operation on the slow path writes its id into the enqTid field of its node. The enqueue operation on the slow path is completed by turning the pending flag off in the state array entry of this thread, indicating that the operation has been completed. The dequeue operation follows similar semantics: the stamp of the head reference holds the id of the thread that is performing its dequeue operation on the slow path, and this id is used to locate the thread's entry in the state array and turn its pending flag off, indicating that the dequeue operation has been completed.

4.3 Correctness

Linearizability and wait-freedom are the two most important properties of wait-free algorithms. This section provides only a brief insight into the proofs; the reader is referred to [7] for the details of the correctness argument.



• Linearizability: Every operation can be executed on the fast path or the slow path, and each of these paths defines certain linearization points; an operation is said to take effect at these points. For example, the enqueue operation is said to take effect when the tail points to the last node on the list. The reader is referred to [7] for a detailed code analysis and the linearization points of this algorithm.

• Wait-freedom: The proof of wait-freedom is twofold. To prove that an algorithm is wait-free, it is first necessary to prove that it is lock-free (non-blocking). To see that the algorithm is non-blocking, note that if the conditions that check the head and tail pointers continually evaluate to false for some thread, then another thread has successfully completed its operation; in other words, a thread fails to apply its operation only if another thread succeeds in applying its own. It has been proven that every thread completes its operation in O(F + D·n²) steps, where F stands for MAX FAILURES and D stands for HELPING DELAY. The reader is referred to [7] for a formal proof.

5 Experimental Results

For the purposes of this project, the algorithm has been implemented in Java with MPJ to provide support for parallel computing. It was tested on the lambda network, a parallel computing environment maintained by the School of Computer Science at Carleton University, Ottawa. The algorithm was run on the machines labeled lambda01 through lambda04, each of which has two processing cores, and evaluated based on the results obtained. Figure 13 shows the results obtained when the algorithm was executed on the lambda network with 16 threads concurrently performing their enqueue and dequeue operations. In this experiment, every thread first performs an enqueue operation followed by a dequeue operation, with the enqueue and dequeue operations of different threads interleaved. In Figure 13, F denotes the MAX FAILURES constant and H denotes the HELPING DELAY constant. Although these results are limited, it is evident from the figure that the performance of the algorithm improves as the constants F and H are increased.

Figure 13: Experimental results of the Fast path Slow path approach with 16 threads

The algorithm has been experimentally shown to be as scalable as the lock-free algorithm of Michael and Scott [9] when the constants are increased to sufficiently large values. The algorithm is also efficient in massively parallel environments, as illustrated in [7].

6 Conclusion

This project implements the fast path slow path approach for the FIFO queue. The basic idea is to execute operations on the fast path and switch to the slow path only when the contention in the system is high. As seen in the results, the performance of the algorithm can be improved by increasing the constants. It has been shown that the algorithm scales to be as efficient as the lock-free MS Queue when the constants are increased sufficiently, making wait-free implementations as efficient as lock-free implementations. An area of future work would be to execute the algorithm on massively parallel processors and evaluate its performance against the lock-free MS Queue.

References

[1] James H. Anderson and Yong-Jik Kim. A new fast path mechanism for mutual exclusion. Distributed Computing, 14:17–29, 2001.

[2] Phong Chuong, Faith Ellen, and Vijaya Ramachandran. A universal construction for wait-free transaction friendly data structures. In Proc. ACM Symposium on Parallel Algorithms and Architectures, pages 164–164. IEEE Comp. Soc. Dig. Library, 2010.

[3] Panagiota Fatourou and Nikolaos D. Kallimanis. A highly efficient wait-free universal construction. In Proc. ACM Symposium on Parallel Algorithms and Architectures, pages 325–334, 2011.

[4] Maurice Herlihy. Wait-free synchronization. ACM Transactions on Programming Languages and Systems, 11(1):124–141, January 1991.

[5] Maurice Herlihy. A methodology for implementing highly concurrent data objects. ACM Transactions on Programming Languages and Systems, 15(5):745–770, November 1993.

[6] Alex Kogan and Erez Petrank. Wait-free queues with multiple enqueuers and dequeuers. In Proc. ACM Symposium on Principles and Practice of Parallel Programming, pages 223–234, 2011.

[7] Alex Kogan and Erez Petrank. A methodology for creating fast wait-free data structures. In Proc. ACM Symposium on Principles and Practice of Parallel Programming, pages 141–150, 2012.

[8] Leslie Lamport. A fast mutual exclusion algorithm. ACM Transactions on Computer Systems, 5(1):1–11, February 1987.

[9] M. M. Michael and M. L. Scott. Simple, fast and practical non-blocking and blocking concurrent queue algorithms. In Proc. ACM Symposium on Principles of Distributed Computing (PODC), pages 267–275, 1996.

[10] Mark Moir. Transparent support for wait-free transactions. In Proc. Conference on Distributed Computing, 1998.

[11] S. Prakash, Y. H. Lee, and T. Johnson. A nonblocking algorithm for shared queues using compare-and-swap. IEEE Transactions on Computers, 43(5):548–559, 1994.

[12] Shahar Timnat, Anastasia Braginsky, Alex Kogan, and Erez Petrank. Wait-free linked lists. To appear in Proc. ACM Symposium on Principles and Practice of Parallel Programming, 2012.

[13] J. D. Valois. Implementing lock-free queues. In Proceedings of the Seventh International Conference on Parallel and Distributed Computing Systems, pages 64–69, 1994.

[14] Jae Heon Yang and James H. Anderson. A fast, scalable mutual exclusion algorithm. Distributed Computing, 9:1–9, 1994.

