IJRIT International Journal of Research in Information Technology, Volume 2, Issue 4, April 2014, Pg: 500-512

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Performance Evaluation of Grid Scheduling Strategies: A Case Study of the Epigenomic Application

Riyadh A.K. Mehdi 1, Roba Bassam Alnajjar 2

1 Associate Professor, Information Technology College, Ajman University of Science & Technology, Ajman, United Arab Emirates, [email protected]

2 Lecturer, College of University Requirements, Ajman University of Science & Technology, Ajman, United Arab Emirates, [email protected]

Abstract

Advances in computer architectures, software systems, and network technologies have inspired the idea that a group of computing machines can be used collectively to solve large-scale, complex engineering and scientific problems characterized by a high degree of parallelism. Grid technology, which connects a number of personal computers, mainframes, and other computing devices with high-speed networks, can achieve the same computing power as a supercomputer at a lower cost. However, a Grid is not a replacement for a supercomputer in applications where the tasks must be performed sequentially. The Grid component responsible for assigning application tasks to resources is the scheduler. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance. However, no single scheduling strategy performs best for all Grid environments and applications. An alternative is to select the best scheduling algorithm for a particular Grid environment based on the application requirements and the Grid configuration. This paper addresses scheduling algorithms used for parallel applications that can be represented by a Directed Acyclic Graph (DAG) in Grid computing systems. It investigates the schedules produced by three well-known scheduling algorithms in terms of their execution, transfer, and scheduling times when applied to the Epigenomic application, which is used to map the epigenomic state of human cells on a genome-wide scale. We assume that the algorithm that yields the shortest execution makespan is the best algorithm for this application. We show that the execution time can be reduced from days to hours if the appropriate algorithm is chosen. Experiments were performed using the GridSim simulator to evaluate the different algorithms and find the most suitable one for the Epigenomic application. The results show that the effectiveness of a scheduling algorithm depends on the application characteristics. We found that, for the Epigenomic application, the most suitable algorithm is one that considers the entire set of workflow tasks and computes the schedule very quickly, because all the tasks of this application are interrelated and have small runtime and data-transfer requirements.

Keywords: Grid computing; GridSim; Grid scheduling; Epigenomic.

1. Introduction

Scientists in many fields are developing large-scale workflow applications for complex, data-intensive scientific analysis. These applications require a large number of low-latency computational resources to produce results in a reasonable amount of time. The Grid is emerging as a new paradigm that enables the sharing, selection, and aggregation of distributed resources for solving large-scale, data-intensive problems in science, engineering, and commerce [1]. Because a Grid can achieve the same computing power as a supercomputer at a much reduced cost, it can be thought of as a virtual supercomputer [1, 2].


Grid computing consists of geographically distributed and heterogeneous computational and storage resources that may belong to different administrative domains but are shared among users by establishing a global resource-management architecture [1, 2]. A single site can simply no longer meet all the resource needs of today's demanding applications, and using distributed resources can bring many benefits to application users. Grid applications ranging from high-energy physics, gravitational-wave physics, geophysics, and astronomy to biology and bioinformatics can be viewed as complex workflows consisting of various transformations performed on the data. For example, in astronomy, workflows with thousands of tasks are needed to identify galaxy clusters within the Sloan Digital Sky Survey [3]. Because of the large amounts of computation and data involved, these workflows require the power of the Grid to execute. IBM describes Grid computing indirectly by referring to its features: "Grid computing allows you to unite pools of servers, storage systems, and networks into a single large system so you can deliver the power of multiple-systems resources to a single user point for a specific purpose. To a user, data file, or an application, the system appears to be a single enormous virtual computing system." [2]

Although Grid technologies enable the sharing and utilization of widespread resources, the effectiveness of a Grid environment depends largely on the effectiveness and efficiency of its schedulers, which act as localized resource brokers [4]. To fully exploit the power of Grid computing, we need an efficient job-scheduling algorithm to assign jobs to resources. Scheduling is the decision process by which application components are assigned to available resources to optimize various performance metrics. Scheduling aims at meeting user demands (e.g., in terms of cost and response time) and the objectives of the resource providers (e.g., in terms of profit and resource-utilization efficiency), while maintaining good overall performance and throughput for the Grid network. Scheduling in a Grid environment is more challenging because it involves many clusters with different properties: not only are the processors heterogeneous, but the communication variance is also larger. Scheduling tasks on heterogeneous Grid resources is, in general, an NP-complete problem, so it is quite unlikely that an efficient polynomial-time algorithm can be found [5]. Solutions based on an exhaustive search are impractical, as the overhead of generating schedules is very high. In Grid environments, scheduling decisions must be made in the shortest time possible, because many users compete for resources, and time slots desired by one user could be taken up by another user at any moment. Therefore, to obtain a near-optimal solution within a finite duration, heuristics and meta-heuristics are used instead of exact optimization methods. In some real-world situations, meta-heuristic methods are too difficult to apply or are inappropriate, as in fully automated systems where parameters cannot be tuned manually or where the execution time must be very short; using meta-heuristics in such situations is therefore not an appropriate solution [6, 7].

2. Literature Survey and Previous Work

Numerous heuristics have been proposed for scheduling problems described as a Directed Acyclic Graph (DAG) onto heterogeneous or homogeneous computing environments [8, 9, 10]. Izakian et al.
[7] proposed a heuristic for scheduling Grid tasks in a heterogeneous computing environment, called min-max, with the objective of minimizing makespan and flow-time. They compared it with five other heuristics that also attempt to minimize makespan and flow-time. They concluded that their heuristic, together with min-min, is more effective than the others for generating initial solutions in a simulated annealing algorithm. They also concluded that while the min-min heuristic is more suitable for homogeneous environments, their heuristic is better suited for heterogeneous environments, and that while min-min is best at minimizing flow-time, their heuristic is more efficient than the others in minimizing makespan. Braun et al. [11] selected, adapted, implemented, and compared the relative performance of eleven static heuristics under a set of common assumptions. To facilitate these comparisons, they also assumed that each meta-task is a collection of independent tasks with no inter-task data dependencies. The mapping of the meta-tasks is performed off-line, in a predictive manner, and the goal of the mapping is to minimize the total execution time of the meta-task, i.e., the makespan. They concluded that the relatively simple


min-min heuristic performs well in comparison with the other, more complex techniques investigated; however, genetic algorithms gave the best results, with min-min second. Wieczorek et al. [12] examined different existing approaches for scheduling scientific workflow applications in a Grid environment. They evaluated three heuristic algorithms, namely genetic algorithms, Heterogeneous Earliest Finish Time (HEFT), and a simple "myopic" algorithm, and compared incremental workflow partitioning against the full-graph scheduling strategy. They experimented with real-world scientific applications covering both balanced (symmetric) and unbalanced (asymmetric) workflows. They concluded that full-graph scheduling with the HEFT algorithm performs best compared with the other strategies examined in their paper. They also concluded that the myopic algorithm can be considered a just-in-time scheduling strategy, as the scheduling decisions made by the algorithm are optimized for the current time instance.

This work focuses on the performance of some well-known heuristic job-scheduling algorithms when applied to the Epigenomic application, taking into consideration the total completion time of jobs in a Grid environment. The remainder of the paper is organized as follows: Section 3 gives an overview of Grid architecture and scheduling strategies and algorithms. Section 4 presents the Grid workflow model and the Epigenomic Grid application. Section 5 describes the simulation environment, while Section 6 presents the experimental results obtained from running the Epigenomic application with two workflow sizes and different scheduling algorithms. Finally, Section 7 discusses the conclusions drawn from the simulation results and outlines future work. For further surveys of Grid scheduling algorithms, the reader is referred to [6, 13, 14, 15, 16].

Fig. 1 Grid Scheduling Model

3. Grid Architecture and Scheduling

A Grid system framework consists of four main components: users, a job scheduler, an information server, and Grid resources organized into different clusters, as shown in Figure 1. The user submits an application for execution on a Grid resource (site/cluster), which may belong to a different administrative domain. Grid scheduling involves three main phases [13, 17, 18]:

Phase one is Resource Discovery, which provides a list of available resources. The Grid Information Service (GIS) is responsible for collecting and predicting resource information, such as CPU capacities, memory size, network bandwidth, software availability, and the load of a resource in a particular period, and for providing this information to Grid schedulers. The Globus Monitoring and Discovery System is an example of a GIS [19].
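The resource attributes listed above can be viewed as a simple record that the GIS publishes and the scheduler consumes. The following Java sketch is only an illustrative data structure; the field names are our assumptions and do not reflect the schema of the Globus Monitoring and Discovery System or any other GIS.

// Illustrative resource descriptor of the kind a GIS might publish
// (field names are assumptions, not the Globus MDS schema).
public class ResourceInfo {
    String resourceName;
    double cpuRatingMips;        // CPU capacity of a processing element
    long memoryMB;               // available memory
    double bandwidthMbps;        // network bandwidth to the resource
    String[] installedSoftware;  // software available on the resource
    double recentLoad;           // measured load over a recent period
}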






Phase two is Resource Scheduling, which involves the selection of feasible resources and the selection of the best assignment of jobs to resources. A job scheduler uses the information from the GIS and the task properties (e.g., approximate instruction count, memory and storage requirements, task dependencies within a job, and input data size) to produce a feasible schedule.

The third phase is Job Execution, which includes file staging and cleanup. The Launching and Monitoring (LM) module implements a schedule by submitting tasks to the selected resources, staging input data and executables if necessary, and monitoring the execution of the tasks. An example of an LM is the Globus GRAM (Grid Resource Allocation and Management) [19]. A Local Resource Manager (LRM) has two main responsibilities: local scheduling, within a resource domain, of remote jobs from external Grid users as well as jobs of local users, and reporting resource information to the GIS. An LRM collects local resource information with tools such as the Network Weather Service [20] and reports the information to the GIS.

Fig. 2 Centralized Scheduling

3.1 Scheduling Paradigms

A Grid schedule is the assignment of tasks to specific available time intervals of resources, such that no two tasks occupy the same resource in the same time interval and the capacity of the resource is not exceeded by the tasks allocated to it, where the goal is to optimize a given objective [17]. Hamscher et al. [21] present three scheduling paradigms: centralized, hierarchical, and distributed. In a centralized scheduling environment, a central machine (node) acts as a resource manager that assigns jobs to all surrounding nodes that are part of the environment. This scheduling paradigm is often used in situations such as a computing centre where resources have similar characteristics and usage policies. Figure 2 shows the architecture of centralized scheduling.


Fig. 3 Distributed Scheduling

One advantage of a centralized scheduling system is that the scheduler may produce better scheduling decisions because it has all the necessary, up-to-date information about the available resources. However, centralized scheduling obviously does not scale well with the increasing size of the environment it manages. The scheduler itself may become a bottleneck, and a hardware or software failure of the scheduler's server constitutes a single point of failure in the environment [21]. In distributed scheduling, there is no central scheduler responsible for managing all the jobs (see Figure 3). Instead, distributed scheduling involves multiple localized schedulers, which interact with each other to dispatch jobs to the participating nodes. Distributed scheduling overcomes the scalability problem and provides fault tolerance and better reliability than a centralized scheduler. However, the lack of a global scheduler with all the necessary information on available resources usually leads to sub-optimal scheduling decisions [21]. In hierarchical scheduling, a centralized scheduler interacts with local schedulers for job submission. The centralized scheduler is a meta-scheduler that dispatches submitted jobs to local schedulers. Figure 4 shows the architecture of this paradigm. Compared with centralized scheduling, one advantage of hierarchical scheduling is that the global scheduler and the local schedulers can have different policies for scheduling jobs [21].

Fig. 4 Hierarchical Scheduling

3.2 Scheduling Algorithms

A large number of scheduling algorithms have been proposed in the literature for both homogeneous and heterogeneous systems. Some of them can be used in a Grid environment with appropriate modifications [7, 11, 14]. These modifications are needed to meet specific Grid circumstances, such as:

• Many users compete for the Grid resources.
• Resources are not under the control of the scheduler.
• Resources are heterogeneous and may not all perform identically for any given task.
• Many workflow applications are data-intensive, and large data sets must be transferred between multiple sites.

Most scheduling algorithms designed for use in a Grid environment use heuristics and are categorized into two types: batch mode and on-line mode [12]. This classification is based on when a task is mapped onto a machine. If a task is mapped onto a machine as soon as it arrives, the heuristic is classified as on-line mode. In batch-mode mapping heuristics, tasks are mapped at prescheduled times, termed mapping events. The set of tasks mapped during a mapping event includes tasks that arrived after


the previous mapping event and tasks that were mapped during earlier mapping events but did not begin their execution. Another classification of mapping heuristics is static versus dynamic. In static mapping heuristics, the matching and scheduling decisions are made before the execution of the application (at compile time); in dynamic mapping heuristics, they are made at runtime. When two tasks have a data dependency between them, static heuristics assume that the data-transfer times are known in advance. Static mapping usually imposes no overhead on the execution time of the mapped application; therefore, more complex mapping solutions than the dynamic ones can be adopted. The static approach, however, is not well suited to a Grid environment, which is highly dynamic in nature [2, 22]. In general, there are four classes of heuristic scheduling algorithms for workflow applications: individual task scheduling, list scheduling, cluster scheduling, and duplication-based scheduling [12, 14, 15, 20, 23]. In this work, we have chosen the Minimum Completion Time (MCT) scheduling algorithm from the individual task scheduling category, and the Min-Min and HEFT scheduling algorithms from the list scheduling category. Both cluster-based and duplication-based scheduling algorithms focus on reducing communication delay among interdependent tasks; since our application is not communication-intensive, these algorithms were not considered.
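To make the chosen heuristics concrete, the sketch below gives a minimal, self-contained Java implementation of MCT (individual task scheduling, on-line mode) and Min-Min (batch mode) over a set of ready tasks. The Task and Resource classes, the MIPS ratings, and the ready-time bookkeeping are illustrative assumptions and are independent of GridSim.

import java.util.*;

// Minimal sketch (not GridSim code): MCT and Min-Min over independent ready tasks.
class Task {
    final int id;
    final double lengthMI;   // estimated task length in million instructions
    Task(int id, double lengthMI) { this.id = id; this.lengthMI = lengthMI; }
}

class Resource {
    final int id;
    final double ratingMIPS; // processing speed of the single PE
    double readyTime = 0.0;  // time at which the resource becomes free
    Resource(int id, double ratingMIPS) { this.id = id; this.ratingMIPS = ratingMIPS; }
    double completionTime(Task t) { return readyTime + t.lengthMI / ratingMIPS; }
}

public class Heuristics {

    // MCT (individual task scheduling): map each task, in arrival order,
    // to the resource that gives its minimum completion time.
    static void mct(List<Task> tasks, List<Resource> resources) {
        for (Task t : tasks) {
            Resource best = resources.get(0);
            for (Resource r : resources) {
                if (r.completionTime(t) < best.completionTime(t)) best = r;
            }
            best.readyTime = best.completionTime(t);
            System.out.printf("MCT: task %d -> resource %d%n", t.id, best.id);
        }
    }

    // Min-Min (batch mode): repeatedly pick the unmapped task whose best
    // completion time is smallest and assign it to that resource.
    static void minMin(List<Task> tasks, List<Resource> resources) {
        List<Task> unmapped = new ArrayList<>(tasks);
        while (!unmapped.isEmpty()) {
            Task bestTask = null;
            Resource bestRes = null;
            double bestCt = Double.MAX_VALUE;
            for (Task t : unmapped) {
                for (Resource r : resources) {
                    double ct = r.completionTime(t);
                    if (ct < bestCt) { bestCt = ct; bestTask = t; bestRes = r; }
                }
            }
            bestRes.readyTime = bestCt;
            unmapped.remove(bestTask);
            System.out.printf("Min-Min: task %d -> resource %d (finish %.2f)%n",
                    bestTask.id, bestRes.id, bestCt);
        }
    }

    public static void main(String[] args) {
        List<Resource> resources = Arrays.asList(new Resource(0, 2000), new Resource(1, 3000));
        List<Task> tasks = Arrays.asList(new Task(0, 6000), new Task(1, 3000), new Task(2, 9000));
        mct(tasks, resources);
        for (Resource r : resources) r.readyTime = 0.0;  // reset before the second heuristic
        minMin(tasks, resources);
    }
}

In this toy run, MCT maps tasks in arrival order, while Min-Min repeatedly commits the task with the globally smallest best completion time; the different resource ready times after each run illustrate why the two heuristics can produce different makespans.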

3.3 Algorithms' Computational Complexity

In general, heuristic-based algorithms can produce a reasonably good solution in polynomial time [13]. Among the heuristic algorithms, individual task scheduling is the simplest. Sakellariou and Zhao showed that hybrid algorithms outperform the Min-Min and HEFT algorithms in most cases; however, computing task priorities in such algorithms incurs a higher scheduling time. Table 1 lists the computational complexity of the algorithms considered here [24]. In Table 1, v is the number of tasks in the workflow, m is the number of resources, and g is the number of tasks per batch in batch-mode scheduling. HEFT has the highest complexity. All of these algorithms have low computational complexity compared with meta-heuristic algorithms such as genetic algorithms [24].

Table 1. Complexity of some heuristic algorithms

    Algorithm          Complexity
    MCT                O(vm)
    Min-Min, Max-Min   O(vgm)
    HEFT               O(v²m)

4. Grid Workflow Model and the Epigenomic Grid Application

A Grid application is represented as an abstract workflow using a DAG that describes the application components and the dependencies among them, as shown in Figure 5, without identifying the resources that will be used to perform the computations. In a DAG, the vertices represent the computing tasks and the edges represent the data dependencies between them; weights on the vertices give the estimated computational power required to finish each task, and weights on the edges give the amount of data to be transferred between tasks. For example, the communication volume from t1 to t2 in Figure 5 is 1. Since the DAG scheduling problem is NP-complete [5], we rely on heuristic and meta-heuristic scheduling strategies to obtain the most efficient solution possible. Tasks are released over time according to the precedence constraints: a task can start its execution only after all its dependencies have been satisfied.

The USC Epigenome Center conducts research on the epigenetic state of human genomes, and the Epigenomic workflow is based on the application used in that research. It takes DNA sequence data and separates it into several chunks; for each chunk, several conversions, mappings, and filters are applied independently of each other. Figure 6 shows the structure of a small Epigenomic workflow, which performs the following operations:

• fastQSplit: the DNA sequence data is split into several chunks that can be operated on in parallel.


   

• filterContams: noisy and contaminating sequences are filtered out.
• map: the remaining sequences are mapped to the correct locations in the genome.
• mapMerge: generates the global map.
• mapIndex: computes the sequence density for each position in the genome.

The Epigenomic workflow has a balanced structure consisting of several parallel pipelines, which require the same types of services but process different data sets. A simple way of improving performance is therefore to run the Epigenomic application on a Grid platform, since many parts of it can run in parallel.
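A workflow of the kind shown in Figures 5 and 6 can be represented directly as a DAG data structure in which vertices carry computation estimates and edges carry data volumes. The following Java sketch builds one small, Epigenomic-like graph and releases tasks in precedence order; the class name, task costs, and data sizes are illustrative assumptions, not values taken from the workflow generator.

import java.util.*;

// Minimal DAG model (illustrative): vertices hold estimated computation cost,
// edges hold the data volume to be transferred between tasks.
class WorkflowTask {
    final String name;
    final double costMI;                                            // estimated computation (MI)
    final Map<WorkflowTask, Long> children = new LinkedHashMap<>(); // child -> bytes transferred
    final List<WorkflowTask> parents = new ArrayList<>();
    WorkflowTask(String name, double costMI) { this.name = name; this.costMI = costMI; }
    void addChild(WorkflowTask child, long bytes) {
        children.put(child, bytes);
        child.parents.add(this);
    }
    // A task is released only when all of its parents have completed.
    boolean ready(Set<WorkflowTask> completed) { return completed.containsAll(parents); }
}

public class EpigenomicDag {
    public static void main(String[] args) {
        // Hypothetical weights: one split task feeding two parallel pipelines,
        // merged and indexed at the end, mirroring the shape of Figure 6.
        WorkflowTask split = new WorkflowTask("fastQSplit", 1000);
        WorkflowTask filt1 = new WorkflowTask("filterContams-1", 500);
        WorkflowTask filt2 = new WorkflowTask("filterContams-2", 500);
        WorkflowTask map1  = new WorkflowTask("map-1", 2000);
        WorkflowTask map2  = new WorkflowTask("map-2", 2000);
        WorkflowTask merge = new WorkflowTask("mapMerge", 800);
        WorkflowTask index = new WorkflowTask("mapIndex", 600);

        split.addChild(filt1, 1_000_000); split.addChild(filt2, 1_000_000);
        filt1.addChild(map1, 800_000);    filt2.addChild(map2, 800_000);
        map1.addChild(merge, 500_000);    map2.addChild(merge, 500_000);
        merge.addChild(index, 2_000_000);

        // Topological release: simulate the precedence constraints described above.
        Set<WorkflowTask> completed = new HashSet<>();
        List<WorkflowTask> all = Arrays.asList(split, filt1, filt2, map1, map2, merge, index);
        while (completed.size() < all.size()) {
            for (WorkflowTask t : all) {
                if (!completed.contains(t) && t.ready(completed)) {
                    System.out.println("released: " + t.name);
                    completed.add(t);
                }
            }
        }
    }
}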

Fig. 5 A workflow represented as a DAG

5. Simulation Environment

To provide performance comparisons, we use workflows from a parametric workflow generator that produces synthetic workflows of various scales that closely resemble those of real applications. These workflows were executed using the Pegasus workflow management system on the TeraGrid environment [25].

5.1 The Simulator

We used the GridSim simulator to evaluate the Grid workflow scheduling techniques in different Grid environments. GridSim is a Java-based simulator that works on top of the SimJava discrete-event simulation framework [26]. The basic entities of GridSim are Grid users, the Grid Information Service (GIS), the replica catalogue (RC), Grid resources, and brokers. A user submits tasks to a Grid broker, which, on behalf of the user, finds the best match between resources and tasks and assigns the tasks to the available resources. The GIS entity maintains the complete characteristics of the resources and provides accurate resource information to the broker [26].

5.2 GridSim Challenges

A number of difficulties were encountered in using GridSim. The most challenging is that our application has a workflow consisting of interdependent tasks, whereas GridSim is intended for applications with independent tasks and does not support task dependencies. The problems faced and their solutions are as follows:

• Child–parent relationship: set each task's class type to its parent ID, so that the parent finish time can be taken into account while scheduling.




   

• Precedence constraint: set the start time of any task by taking the maximum of the machine ready time and the parent finish time, so that if the machine is ready but the parent has not finished, the start time of the child is the parent finish time (see the sketch after this list).
• Data-transfer time: add the transfer time to the task runtime; if the parent and child tasks are executed on the same processor, subtract the transfer time from the task's runtime.
• Individual task execution: store all tasks as they arrive at the remote resource, then start scheduling so that the entire workflow is considered.
• Machine heterogeneity: create machines with different characteristics, then change the code that assumes homogeneous machines to take these differences into account.
• Scheduling algorithm: extend the AllocPolicy class to implement the required algorithm; GridSim only supports the FCFS and Round Robin algorithms by default.
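The precedence and data-transfer rules in the list above reduce to a small earliest-start-time calculation. The sketch below is a generic illustration of that logic; the class and method names are ours and are not part of the GridSim API.

// Illustrative earliest-start-time logic (not GridSim API): the start time of a
// child task is the later of the machine's ready time and its parents' data
// arrival times, and transfer time is charged only when parent and child run
// on different machines.
class ScheduledTask {
    double finishTime;
    int machineId;
}

public class PrecedenceRules {

    /** Earliest time a child task may start on the given machine. */
    static double earliestStart(double machineReadyTime,
                                java.util.List<ScheduledTask> parents,
                                int machineId,
                                double bandwidthBytesPerSec,
                                long inputBytes) {
        double start = machineReadyTime;
        for (ScheduledTask parent : parents) {
            double dataArrival = parent.finishTime;
            if (parent.machineId != machineId) {
                // parent output must be staged across the network
                dataArrival += inputBytes / bandwidthBytesPerSec;
            }
            start = Math.max(start, dataArrival);
        }
        return start;
    }
}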

Fig. 6. Epigenomic workflow

5.3 Simulated Grid Model

The simulated Grid environment consists of a single cluster of eight machines. The system is heterogeneous in terms of CPU speed: four machines run at 2 GHz and the other four at 3 GHz, and each machine has one processing element (PE). In GridSim, the speed of each PE is expressed as a Million Instructions Per Second (MIPS) rating. The input data set is replicated across all the machines. The interconnection bandwidth between all the machines is 10 Gbit/s.

5.4 Application Model

An Epigenomic workflow was generated using a workflow generator [25]. This generator uses information gathered from actual executions of scientific workflows on the Grid and produces workflows that resemble those of real workflow applications. In GridSim, jobs are packaged as Gridlets. A Gridlet carries the job length in millions of instructions (MI), the size of the job's input and output data in bytes, and various other execution-related parameters as the job moves between the broker and the resources. The job length is expressed in terms of the time it takes to run on a standard resource PE, and Gridlet processing times are expressed in the same way. Two sets of experiments were carried out: in the first, the workflow size is 24 tasks; in the second, it is 500 tasks.
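Because the job length is expressed relative to a standard PE rating, the estimated runtime of a job on a given machine reduces to a division of length by rating. The sketch below illustrates the convention with assumed numbers; the values are hypothetical and the GHz-to-MIPS mapping is only indicative.

// Illustrative runtime estimate (assumed values): a job of a given length in
// million instructions (MI) running on a PE rated in MIPS.
public class RuntimeEstimate {
    public static void main(String[] args) {
        double jobLengthMI = 36_000;   // hypothetical Gridlet length
        double peRatingMips = 3_000;   // hypothetical rating for one of the faster machines
        double runtimeSeconds = jobLengthMI / peRatingMips;
        System.out.printf("estimated runtime: %.1f s%n", runtimeSeconds);  // prints 12.0 s
    }
}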

5.5 Additional Considerations

In performing the simulations, the following assumptions were made:




• The replica catalog and the transformation catalog are not modeled. We assume that the data is replicated across all locations, so an algorithm for selecting an appropriate replica location as the data-access point is not required. In addition, each processor meets the basic resource requirements of every task, so all tasks can be executed on any machine and a transformation catalog is not needed.

Fig. 7 Makespan of Epigenomic size 24

    

• In order to make sensible scheduling decisions, it is assumed that information about the estimated execution time of each task is available, as well as estimates of the speed of the links connecting the machines. This information, combined with the amount of data that must be transferred before a task starts its execution, gives an estimate of the earliest possible start time of a task whose parents have finished. The execution time of any task is assumed to include the time to access the input data.
• The number of machines in the heterogeneous system is fixed.
• The system is dedicated to the execution of the scheduled task graph; no other program runs on the system while the scheduled task graph is being executed.
• The communication network is fully connected: every processor can communicate directly with every other processor via a dedicated, identical communication link.
• The workflow scheduling architecture is hierarchical, in which each workflow submitted to the system is managed and scheduled by an individual broker.

Fig. 8 Transfer time of Epigenomic size 24

• A task can be executed only after all its predecessor tasks have completed and all the corresponding data files are available at the execution site.
• Task preemption and task migration during execution are not considered; once started, tasks run to completion.
• The goal is to minimize the total execution time of the application by taking advantage of parallel and distributed architectures. The minimum execution time heuristic is used.
• Each host can execute only a single task at any time, and each task can be mapped to only one host.


5.6 Performance Metrics

The following two performance metrics are used to evaluate the effectiveness of the scheduling algorithms:

• Makespan: the total execution time of the entire workflow application from start to finish. The makespan includes the staging-in of the input files to the entry tasks and the staging-out of the results from the exit tasks.
• Data transfer time: the total time spent transferring data between machines. Data movement occurs when a child task is scheduled on a different machine from its parent; it consumes network resources and should be minimized.
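Both metrics can be computed directly from per-task schedule records. The following minimal sketch assumes each record carries a start time, a finish time, and the time spent on incoming data transfers; the record layout is an assumption made for illustration.

import java.util.List;

// Minimal metric computation over assumed per-task schedule records.
class TaskRecord {
    double startTime;      // includes staging-in of input files for entry tasks
    double finishTime;     // includes staging-out of results for exit tasks
    double transferTime;   // time spent moving data from parent machines (0 if co-located)
}

public class Metrics {
    /** Makespan: finish of the last task minus start of the first task. */
    static double makespan(List<TaskRecord> schedule) {
        double start = Double.MAX_VALUE, end = 0.0;
        for (TaskRecord r : schedule) {
            start = Math.min(start, r.startTime);
            end = Math.max(end, r.finishTime);
        }
        return end - start;
    }

    /** Total data transfer time accumulated over all tasks. */
    static double totalTransferTime(List<TaskRecord> schedule) {
        double total = 0.0;
        for (TaskRecord r : schedule) total += r.transferTime;
        return total;
    }
}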

Fig. 9 Makespan of Epigenomic size 500

6. Experimental Results

6.1 Workflow-24 (Eight Machines)

Executing the Epigenomic workflow on a Grid platform showed significant advantages over a single machine. Figure 7 depicts the total makespan produced by executing the Epigenomic workflow of size 24 using the three algorithms. The total makespan of the workflow decreased considerably, from four hours on a single machine to approximately one hour on a Grid of eight machines.

Fig. 10 Transfer time of Epigenomic size 500

HEFT generates schedules with up to 30% shorter makespan than MCT and Min-Min because HEFT considers the entire workflow while scheduling: it assigns higher priority to tasks on the critical path (the longest path to the exit node), so these tasks are allocated first to the fastest processors, reducing the makespan. We also found that scheduling policies that take into account the communication between tasks achieve better performance than policies that do not. Figure 8 shows the total transfer time among the individual tasks of the Epigenomic application of size 24; as shown there, HEFT achieves the lowest transfer time. Although MCT is the simplest algorithm, the extra time spent computing the HEFT schedule is worthwhile given the reduction in makespan.
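HEFT's behaviour described above follows from its upward-rank priority: the rank of a task is its average computation cost plus the maximum, over its children, of the average communication cost plus the child's rank, so tasks on the critical path receive the highest priorities and are scheduled first. The following standalone Java sketch computes the upward rank with assumed average costs; it is an illustration of the standard HEFT ranking, not the code used in our experiments.

import java.util.*;

// Upward-rank computation used by HEFT (sketch with assumed average costs).
// rank(t) = avgComputeCost(t) + max over children c of (avgCommCost(t,c) + rank(c));
// tasks are then scheduled in decreasing rank order, so critical-path tasks go first.
class DagNode {
    final String name;
    final double avgComputeCost;                                  // averaged over all machines
    final Map<DagNode, Double> children = new LinkedHashMap<>();  // child -> avg comm cost
    DagNode(String name, double avgComputeCost) { this.name = name; this.avgComputeCost = avgComputeCost; }
}

public class HeftRank {
    static double upwardRank(DagNode t, Map<DagNode, Double> memo) {
        Double cached = memo.get(t);
        if (cached != null) return cached;
        double best = 0.0;
        for (Map.Entry<DagNode, Double> e : t.children.entrySet()) {
            best = Math.max(best, e.getValue() + upwardRank(e.getKey(), memo));
        }
        double rank = t.avgComputeCost + best;
        memo.put(t, rank);
        return rank;
    }

    public static void main(String[] args) {
        // Tiny example with hypothetical costs.
        DagNode t1 = new DagNode("t1", 10), t2 = new DagNode("t2", 4), t3 = new DagNode("t3", 7);
        t1.children.put(t2, 2.0);
        t1.children.put(t3, 3.0);
        Map<DagNode, Double> memo = new HashMap<>();
        System.out.printf("rank(t1) = %.1f%n", upwardRank(t1, memo));  // 10 + max(2+4, 3+7) = 20
    }
}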


6.2 Workflow-500 (50 Machines)

Figures 9 and 10 show the results of repeating the experiment with an Epigenomic application of size 500 and a Grid of 50 machines. HEFT still outperforms the other two algorithms with respect to both makespan and transfer time. In fact, HEFT performs even better relative to the other two algorithms than in the previous experiment: its makespan is about 40% shorter than that of Min-Min, compared with 33% previously. The reason is that the larger the number of tasks, the more important it becomes to schedule the critical tasks first, and this is where HEFT has the advantage over the other two algorithms.

6.3 Workflow-500 (120 Machines)

The same experiments were repeated using more machines. The results indicate that with more machines the differences between the algorithms become very small, because it matters less whether the critical tasks are executed first: the available machines can accommodate the critical tasks and the others under all three algorithms. Even so, HEFT still outperforms Min-Min by about 20%. Figure 11 shows the results for this experiment.

Fig. 11 Makespan of Epigenomic size 500

7. Conclusions

Although the Grid provides access to a wide range of computing resources, the use of an inappropriate scheduling algorithm may introduce overheads and delays that make the Grid an inefficient platform for executing applications. In this paper, we compared the performance of three Grid scheduling algorithms by applying them to the Epigenomic Grid application. The results indicate that the most appropriate algorithm for this application is one that considers the entire workflow while scheduling and assigns higher priority to the tasks on the longest path, leading to minimal processing time. Using the HEFT algorithm for the Epigenomic workflow of size 24 on eight machines reduces the runtime from four hours to one hour. This algorithm not only produces a shorter schedule but also minimizes the data transfer time, because it takes transfer time into consideration when generating the schedule. We also observed that increasing the number of machines reduces the performance differences among the scheduling algorithms while reducing the execution time of each. Therefore, two issues must be considered while scheduling: the first relates to the application characteristics, such as the number of tasks, their relationships, and the amount of data transferred between them; the second relates to the execution environment, such as the number of machines. Since the Epigenomic application consists of short-running tasks with small amounts of data transfer, the most suitable algorithm for it is one that considers the entire set of workflow tasks and computes the schedule very quickly. In future work, we plan to study more scheduling algorithms, in particular meta-heuristics, which are known for producing efficient schedules but require very high computation time, and to identify the characteristics of applications for which such algorithms can significantly reduce run time.


References

[1] I. Foster and C. Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 2004, pp. 3–12.
[2] E. Kourpas, Grid Computing: Past, Present and Future – An Innovation Perspective, IBM white paper, 2006.
[3] E. Deelman et al., GriPhyN and LIGO: Building a Virtual Data Grid for Gravitational Wave Scientists, Proceedings of the 11th International Symposium on High Performance Distributed Computing, 2002.
[4] R. Sharma, V. K. Soni, M. K. Mishra, and P. Bhuyan, A Survey of Job Scheduling and Resource Management in Grid Computing, World Academy of Science, Engineering and Technology, 64, 2010.
[5] D. Fernández-Baca, Allocating modules to processors in a distributed system, IEEE Transactions on Software Engineering, 15(11):1427–1436, November 1989.
[6] F. Dong and S. G. Akl, Scheduling algorithms for grid computing: State of the art and open problems, Technical Report 2006-504, Queen's University, School of Computing, January 2006.
[7] H. Izakian, A. Abraham, and V. Snasel, Performance Comparison of Six Efficient Pure Heuristics for Scheduling Meta-Tasks on Heterogeneous Distributed Environments, Neural Network World, 19(6):695–710, 2009.
[8] R. Bajaj and D. P. Agrawal, Improving scheduling of tasks in a heterogeneous environment, IEEE Transactions on Parallel and Distributed Systems, 15(2):107–118, 2004.
[9] H. Topcuoglu, S. Hariri, and M. Y. Wu, Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing, IEEE Transactions on Parallel and Distributed Systems, 13(3):260–274, March 2002.
[10] S. Ranaweera and D. P. Agrawal, A scalable task duplication based scheduling algorithm for heterogeneous systems, Proceedings of the 2000 International Conference on Parallel Processing (ICPP '00), p. 383, Washington, DC, USA, IEEE Computer Society, 2000.
[11] T. D. Braun, H. J. Siegel, and N. Beck, A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and Distributed Computing, 61:801–837, 2001.
[12] M. Wieczorek, R. Prodan, and T. Fahringer, Comparison of workflow scheduling strategies on the Grid, Lecture Notes in Computer Science, 2006.
[13] Y. Zhu and L. M. Ni, A Survey of Grid Scheduling Systems, Department of Computer Science, Hong Kong University of Science and Technology, 2003.
[14] H. Casanova et al., Heuristics for Scheduling Parameter Sweep Applications in Grid Environments, 9th Heterogeneous Computing Workshop (HCW '00), April 2000.
[15] Y. K. Kwok and I. Ahmad, Static Scheduling Algorithms for Allocating Directed Task Graphs to Multiprocessors, ACM Computing Surveys, 31(4):406–471, December 1999.
[16] F. Berman et al., New Grid Scheduling and Rescheduling Methods in the GrADS Project, International Journal of Parallel Programming (IJPP), 33(2-3):209–229, 2005.
[17] H. J. Siegel and S. Ali, Techniques for mapping tasks to machines in heterogeneous computing systems, Journal of Systems Architecture, 46(8):627–639, 2000.
[18] J. Nabrzyski, J. M. Schopf, and J. Weglarz, Grid Resource Management: State of the Art and Future Trends, Kluwer Academic Publishers, 2003.
[19] L. Ferreira, N. Bieberstein, V. Berstis, and J. Armstrong, Introduction to Grid Computing with Globus, IBM Redbooks, 2003.
[20] M. Wieczorek, R. Prodan, and T. Fahringer, Scheduling of Scientific Workflows in the ASKALON Grid Environment, ACM SIGMOD Record, 34(3):56–62, September 2005.
[21] V. Hamscher, U. Schwiegelshohn, A. Streit, and R. Yahyapour, Evaluation of Job-Scheduling Strategies for Grid Computing, GRID 2000, pp. 191–202, 17–20 December 2000, Bangalore, India, Lecture Notes in Computer Science, Springer-Verlag.
[22] M. Maheswaran, S. Ali, H. J. Siegel, D. Hensgen, and R. Freund, Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems, 8th Heterogeneous Computing Workshop (HCW '99), April 1999.
[23] T. Tannenbaum, D. Wright, K. Miller, and M. Livny, Condor – A Distributed Job Scheduler, in Computing with Linux, The MIT Press, MA, USA, 2002.


[24] H. Zhao and R. Sakellariou, An experimental investigation into the rank function of the heterogeneous earliest finish time scheduling algorithm, Euro-Par 2003, pp. 189–194.
[25] K. Lee, N. W. Paton, R. Sakellariou, E. Deelman, A. A. A. Fernandes, and G. Mehta, Adaptive Workflow Processing and Execution in Pegasus, Concurrency and Computation: Practice and Experience, 21(16):1965–1981, 2009.
[26] R. Buyya and A. Sulistio, GridSim: A grid simulation toolkit for resource modelling and application scheduling for parallel and distributed computing, 2006.
