Scheduling Dynamically Spawned Processes in MPI-2

Márcia C. Cera, Guilherme P. Pezzi, Maurício L. Pilla, Nicolas Maillard, Philippe O. A. Navaux {mccera, pezzi, pilla, nicolas, navaux}@inf.ufrgs.br Universidade Federal do Rio Grande do Sul - Brazil This work has been partially supported by HP Brazil

12th Workshop on Job Scheduling Strategies for Parallel Processing Saint-Malo, France - June 26, 2006

Introduction Programming with MPI-2 On-line Scheduling in LAM-MPI Improving Process Scheduling in LAM-MPI Conclusion

Outline
1. Introduction
2. Programming with MPI-2
3. On-line Scheduling in LAM-MPI
4. Improving Process Scheduling in LAM-MPI
5. Conclusion


Motivation
The MPI-2 standard (1997) introduced:
- Dynamic creation of processes
- Remote Memory Access (RMA)
- Parallel I/O

The standard does not define any way to schedule dynamic processes:
- Which processor will receive a new process?
- Within a processor, in which order will the processes run?

Goal: offer on-line scheduling to MPI-2 programs, aiming at load balance measured by the number of processes created dynamically on each processor.


MPI Distributions and MPI-2
Some MPI distributions have been providing MPI-2 features for a few years:
- LAM-MPI, since the early 2000s; it also implements tools to manage the dynamic entry and exit of resources (lamgrow/lamshrink), and is based on daemons.
- MPICH, since January 2005.
- HP-MPI, since December 2005.

MPI could almost be used on grids:
- it needs support for heterogeneity (MPICH-G2);
- it needs support for fault tolerance (MPICH-V2, FT-MPI).

Open MPI is a merge of LAM and FT-MPI, which could bring everything into a single distribution!


Dynamic Creation of Processes

    MPI_Comm_spawn(char *command, char **argv, int maxprocs,
                   MPI_Info info, int root, MPI_Comm comm,
                   MPI_Comm *intercomm, int *errcodes)

Parameters:
- command: name of an MPI executable (a program with MPI_Init and MPI_Finalize); this executable will be the child program.
- argv: command-line parameters of the child program.
- maxprocs: number of processes that will execute the child program.
- info: startup hints for resource allocation. LAM-MPI offers MPI_Info keys, set with MPI_Info_set:
  - lam_spawn_file: names an application schema (appschema) file listing the available nodes;
  - lam_spawn_sched_round_robin: distributes the children over the LAM nodes in Round-Robin order, starting on a given node;
  - MPI_INFO_NULL: Round-Robin distribution starting on the node with the lowest rank.
- comm: intracommunicator of the spawning (parent) processes; on the child side, MPI_Comm_get_parent returns the corresponding intercommunicator.
- intercomm: intercommunicator containing the spawned processes; through it the parent exchanges messages with its children.

The Fibonacci Example with MPI-2
MPI-2 code of the "fibo.c" program:

    MPI_Comm_get_parent(&parent);
    if (n < 2) {
        MPI_Isend(&n, 1, MPI_LONG, 0, 1, parent, &req);
    } else {
        sprintf(argv[0], "%ld", (n - 1));
        MPI_Comm_spawn("Fibo", argv, 1, local_info, myrank,
                       MPI_COMM_SELF, &children_comm[0], errcodes);
        sprintf(argv[0], "%ld", (n - 2));
        MPI_Comm_spawn("Fibo", argv, 1, local_info, myrank,
                       MPI_COMM_SELF, &children_comm[1], errcodes);
        MPI_Recv(&x, 1, MPI_LONG, MPI_ANY_SOURCE, 1,
                 children_comm[0], MPI_STATUS_IGNORE);
        MPI_Recv(&y, 1, MPI_LONG, MPI_ANY_SOURCE, 1,
                 children_comm[1], MPI_STATUS_IGNORE);
        fibn = x + y;
        MPI_Isend(&fibn, 1, MPI_LONG, 0, 1, parent, &req);
    }
    MPI_Finalize();


How to Schedule the Spawned Processes
LAM-MPI provides a Round-Robin mechanism: it distributes maxprocs processes, one per available node.

    MPI_Info_set(info, "lam_spawn_sched_round_robin", node);

(Figure: a single spawn(3) call placing one process on each node.)

If a loop structure spawns processes (e.g. three successive spawn(3) calls over the same nodes), the distribution is not balanced: balancing requires knowing who has already received processes. In a distributed setting the problem is even bigger.


Experimental Tests
Spawning 20 processes on 5 nodes, using single and multiple spawn calls with the LAM Round-Robin mechanism:

    Environment               Node 1  Node 2  Node 3  Node 4  Node 5
    20 spawns of 1 process      20      0       0       0       0
    1 spawn of 20 processes      4      4       4       4       4



The Proposed Scheduler
A centralized daemon receives messages from re-defined MPI-2 primitives and makes the scheduling decisions. The use of pre-compilation redefinition is simple and portable: the end user just has to re-compile his program.

The Re-defined MPI_Comm_spawn
(Figure: the parent process, the scheduler daemon, and the spawned child exchanging the five messages below.)
1. MPI_Comm_spawn call.
2. Notification of process creation is sent to the scheduler.
3. The task graph is updated and the physical location is decided.
4. The physical location is returned to the parent.
5. The child is physically spawned.


The Re-defined MPI_Finalize
(Figure: the finishing process and the scheduler daemon exchanging the four messages below.)
1. MPI_Finalize call.
2. Notification of process completion is sent to the scheduler.
3. The task graph is updated.
4. A blocked process is unblocked, if there is any.


Scheduling Heuristics
Scheduling heuristics can be applied at two levels:
- scheduling processes onto resources;
- prioritizing the execution of ready processes within a resource (currently left to the OS).

The scheduler implements our Round-Robin mechanism: it knows the last resource used, and picks

    new_resource = (last_resource + 1) % total_resources

Hierarchical execution: new processes have high priority. A blocking mechanism makes the parent wait for the execution of its children. For Divide & Conquer programs, this causes no deadlocks.

The Fibonacci Test-case
Comparing different schedules: number of processes spawned on each node.

    Environment                          Node 1  Node 2  Node 3  Node 4  Node 5
    fib(6) with LAM standard scheduler     25      0       0       0       0
    fib(6) with embedded scheduler          8      4       8       2       3
    fib(6) with proposed scheduler          5      5       5       5       5

- LAM standard scheduler: always sets the same first Round-Robin node.
- Embedded scheduler: sets the first Round-Robin node to a neighbor.
- Proposed scheduler: sets the first Round-Robin node from global information.


The Fibonacci Test-case
LAM-MPI imposes a file-descriptor limitation, which restricts the maximum number of processes running on a resource. The proposed Round-Robin mechanism makes it possible to compute the 13th Fibonacci number.

    fib(13)      Node 1  Node 2  Node 3  Node 4  Node 5  Total
    processes      151     151     151     150     150     753

2nd Test-Case: an Irregular Computation
Computation of prime numbers in a recursive search, like the Fibonacci program but irregular. This program is CPU-intensive, so a good load balance should impact the running time. Intervals range between 1 and 20 million.

    Environment              Node 1  Node 2  Node 3  Node 4  Node 5
    LAM standard scheduler     39      0       0       0       0
    proposed scheduler          8      8       8       8       7

Good load balance with the proposed scheduler:
- 181.15 s with the LAM standard scheduler;
- 46.12 s with the proposed scheduler.


Conclusion
This work aims to simplify the on-line scheduling of MPI-2 programs, which is of interest for dynamic platforms. The native LAM implementation is not efficient, due to its simple scheduling strategy; a simple prototype led to clear improvements.

Next Steps
- Implement the proposed scheduler inside the LAM-MPI distribution.
- Balance the load according to information about resource usage: M. C. Cera et al., "Improving the dynamic creation of processes in MPI-2", accepted for publication at the 13th European PVM/MPI Users' Group Meeting, Sep. 2006.
- Tests with real-world applications: Branch & Bound, linear-system solving, etc.
- Distribute the centralized scheduler, and implement a work-stealing strategy for the distributed scheduler.

