Sharing Checkpoints to Improve Turnaround Time in Desktop Grid Computing*

Patricio Domingues
School of Technology and Management, Polytechnic Institute of Leiria, Portugal
[email protected]

João G. Silva
Dep. Engenharia Informática, Univ. Coimbra, Portugal
[email protected]

Luis Silva
Dep. Engenharia Informática, Univ. Coimbra, Portugal
[email protected]

Abstract

In this paper, we present a checkpoint sharing methodology to improve the turnaround time of applications run over desktop grid environments. The volatility of desktop grid nodes reduces the efficiency of such environments when a fast turnaround time is sought, since a task may stall if its assigned machine remains unavailable for a period that is long at the scale of the computation. The rationale behind our approach is to permit checkpoint reuse, so that when a computation is forced to move from one node to another it can be restarted from an intermediate point provided by the last saved checkpoint. We study the effects of sharing checkpoints on application turnaround time by simulating three scheduling algorithms based on First Come First Served: FCFS, FCFS-AT and FCFS-TR. The targeted environment consists of institutional desktop grids. Our results show that sharing checkpoints is particularly effective in volatile environments, yielding performance improvements of up to three times relative to schemes based on private checkpoints. Furthermore, for non-volatile environments, a simple timeout strategy produces good results.

Keywords: desktop grid, turnaround time, checkpoint, trace-based simulation.

1. Introduction

Over the last few years, usage of desktop grid systems has gained momentum and visibility, generating volunteer computing projects like SETI@home [1] and many other @home projects [2]. The impressive performance figures of the major volunteer computing projects, such as SETI@home, self-credited with more than 65 TFlops as of

September 2005, clearly demonstrate the effectiveness of harvesting cycles over the internet. The attractiveness of exploiting desktop grid systems is further reinforced by the fact that costs are highly distributed: every volunteer supports her own resources (hardware, power costs and internet connection), while the benefiting entity provides the management infrastructure, namely network bandwidth, servers and management services, receiving in exchange a massive and otherwise unaffordable computing power.

The usefulness of desktop grid computing is not limited to major high-throughput public computing projects. Many institutions, ranging from academic to corporate, hold vast numbers of desktop machines and could benefit from exploiting the idle cycles of their local machines. In fact, several studies confirm that CPU idleness in desktop machines averages 95% [3, 4]. Also, the availability of several desktop grid platforms has eased the setup, management and exploitation of desktop grid systems. Indeed, the potential gains of harvesting idle resources have fostered the development of desktop grid middleware. Currently, several platforms exist, ranging from academic projects such as BOINC [5], XtremWeb [6] and Alchemi [7] to commercial solutions like Entropia [8] and United Devices [9]. This plethora of middleware has contributed to the proliferation of new desktop grids and related projects, not only over the internet but also at an institutional level, as in the case of an academic campus.

The typical and most appropriate application for a desktop grid is comprised of independent tasks (no communication exists amongst tasks) with a high computation-to-communication ratio. The execution of the application is orchestrated by a central scheduler node which distributes the tasks amongst the worker nodes and awaits the workers' results. It is important to

* This research work is carried out in part under CoreGRID funded by the European Commission (Contract IST-2002-004265).

note that an application only finishes when all its tasks have been completed.

The main difference in the usage of institutional desktop grids relative to public ones lies in the dimension of the applications that can be tackled. While public projects usually embrace large applications made up of a huge number of tasks, institutional desktop grids, which are much more limited in resources, are better suited for modestly-sized applications. So, whereas in public volunteer projects the emphasis is on the number of tasks carried out per time unit (throughput), users of institutional desktop grids are normally more interested in a fast execution of their applications, seeking a fast turnaround time.

Desktop grids are highly volatile environments. For instance, an individual machine might be shut down, disconnected from the network or simply detached from the desktop grid system without prior warning. Failures of volunteer resources can be classified into two broad classes: interference failures and volatility failures [10]. The former emerge from the volunteer nature of resources, which are under the control of their respective owners. The latter include network outages and crashes that render the resources inaccessible.

Checkpointing is a usual fault-tolerance mechanism to deal with failures. It consists in periodically saving the state of the executing task to stable storage, usually the executing machine's local disk. Whenever the execution recovers from a failure, the last stable checkpoint (a checkpoint can get corrupted, for instance, if a failure occurs during checkpointing) can be used to resume the execution, reducing the amount of lost computation.

Two main types of checkpointing exist: system-level and user-level. The former relies on operating system mechanisms to take a full snapshot of the target process. While it is transparent to the user, it usually generates huge checkpoint files, since the whole process image needs to be saved.
It also requires support from the operating system (support that does not exist, for instance, on Windows), and saved checkpoints are not portable across operating systems and platforms. On the other hand, user-level checkpointing is application specific and is non-transparent, since it requires the involvement of the application programmer. However, the application programmer can select only the data and state deemed relevant, yielding a much lighter checkpoint. Moreover, if appropriate care is taken with data representation, checkpoints can be used to resume applications across heterogeneous platforms. Apart from Condor, which supports system-level checkpointing [11], desktop grid middleware like BOINC and XtremWeb resorts to user-level checkpointing.

A usual limitation of volunteer computing is that checkpoints are private, in the sense that a checkpoint taken on a given machine will only be used to resume the application on that machine. In this study, we explore the advantages of sharing checkpoints in a desktop grid environment for the purpose of optimising turnaround time, extending the private-checkpoint model to a shared one. Under our approach, portable checkpoints are saved in central storage and can be used for restoring, moving or replicating tasks to other machines.

The remainder of this paper is organized as follows. Section 2 outlines the general characteristics of the considered desktop grid environments and presents the scheduling methodologies we propose to study. Section 3 details the simulated environment and Section 4 presents the main results. In Section 5 we describe related work. Finally, Section 6 concludes the paper and presents possible paths for future work.
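The shared-checkpoint model can be sketched as follows. This is an illustrative sketch only, not the actual middleware implementation: the JSON encoding, the atomic-rename scheme and the function names are our assumptions. A portable user-level checkpoint is written atomically to central storage, so any machine can later resume the task from the last stable checkpoint:

```python
import json
import os
import tempfile

def save_checkpoint(task_id, state, ckpt_dir):
    """Atomically write a portable user-level checkpoint to shared storage.

    Writing to a temporary file and renaming guarantees that a crash
    mid-write never corrupts the last stable checkpoint.
    """
    os.makedirs(ckpt_dir, exist_ok=True)
    path = os.path.join(ckpt_dir, f"{task_id}.ckpt")
    fd, tmp = tempfile.mkstemp(dir=ckpt_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)   # JSON keeps the checkpoint platform-neutral
    os.replace(tmp, path)     # atomic rename over the previous checkpoint

def load_checkpoint(task_id, ckpt_dir):
    """Return the last stable checkpoint, or None to start from scratch."""
    path = os.path.join(ckpt_dir, f"{task_id}.ckpt")
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return None
```

The temporary file is created inside the checkpoint directory so that the final rename stays on the same filesystem, where it is atomic on POSIX systems.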

2. Computing Environment and Scheduling Policies

2.1 Computing environment

This study focuses on institutional desktop grids comprised of ownerless machines, as found in academic classrooms. The ownerless designation covers machines that are not assigned to any individual in particular, contrary to office machines, which are normally under the control of a given user. For the purpose of resource harvesting, we consider that the machines are partitioned into sets, with a given set assigned to a harvesting user for a fixed period of time. This model corresponds to a time-partitioned management of the machines for resource harvesting. For instance, in an academic environment, a representative example would be a user having permission to harvest, for a whole day, the machines of a given number of classrooms. Note that resource harvesting is a non-priority activity, so interactive users have full precedence over any harvesting task.

In this study, the emphasis for the use of harvested resources is clearly on pursuing a fast turnaround time, so the scheduling methodology might trade resources for a faster execution time. For instance, if deemed appropriate, a task can be replicated on multiple machines, even if this means that when the first replicated instance of a task terminates, the computation performed by the other replicas will be discarded and the corresponding computing power effectively wasted.

A further assumption is the existence of a mechanism that can periodically provide, for a given set of machines, a list indicating the reachable and non-reachable machines of the set. This information can be gathered by a simple heartbeat mechanism like Ganglia [12], and thus it is in no way restrictive of the type of environments considered. The periodic knowledge of the available and unavailable machines is useful for scheduling purposes, especially for reacting quickly to machine failures. In fact, early detection of an unavailable machine permits its task to be promptly rescheduled to another machine.

Our task distribution follows the normal model for volunteer computing, in which a central scheduler distributes tasks whenever it receives a volunteer machine's request for a task. Besides its implicit declaration of availability for processing a task, a request for a task includes the possible limitations set by the machine (e.g. maximum physical memory made available for foreign jobs) as well as performance metrics that permit ranking the requesting machine relative to the other machines.
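As an illustration of such a mechanism, the sketch below (our own, hypothetical; the heartbeat period and miss limit are assumed values, not taken from Ganglia) partitions a machine set into reachable and unreachable machines from last-heartbeat timestamps:

```python
import time

class AvailabilityMonitor:
    """Tracks each machine's last heartbeat; any machine silent for more
    than missed_limit heartbeat periods is reported as unreachable."""

    def __init__(self, period=30.0, missed_limit=3):
        self.period = period          # seconds between heartbeats (assumed)
        self.missed_limit = missed_limit
        self.last_seen = {}           # machine name -> last heartbeat time

    def heartbeat(self, machine, now=None):
        self.last_seen[machine] = time.time() if now is None else now

    def partition(self, now=None):
        """Return (reachable, unreachable) lists, as the scheduler would
        poll them to promptly reschedule the tasks of failed machines."""
        now = time.time() if now is None else now
        cutoff = self.period * self.missed_limit
        up = sorted(m for m, t in self.last_seen.items() if now - t <= cutoff)
        down = sorted(m for m, t in self.last_seen.items() if now - t > cutoff)
        return up, down
```

With a 30 s period and a limit of three missed heartbeats, a machine is declared down about 90 s after its last report, which bounds the time a task can sit on a failed machine before rescheduling.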

2.2 Scheduling policies

In this study, we devised and assessed the following types of scheduling algorithms:

- First Come First Served (FCFS)
- First Come First Served with Adaptive Timeout (FCFS-AT)
- First Come First Served with Task Replication (FCFS-TR)

Besides their own specific characteristics, all the enumerated scheduling policies make use of the shared checkpoint mechanism to resume interrupted tasks. So, whenever an executing machine fails to report, its assigned task returns to the "available for submission" state. It can then be assigned to a volunteer machine that requests foreign work, and resumed from the last available checkpoint.

First Come First Served (FCFS). FCFS is the classical eager and simple scheduling algorithm, where a task is delivered to the first worker that requests it. This scheduling policy is particularly appropriate when high-throughput computing is sought, and thus it is commonly used in the major volunteer desktop grid projects. Note that, in the case of simultaneous requests, tasks are assigned according to the performance rank of the requesting machines (the fastest machine is the first one served, and so on).

First Come First Served with Adaptive Timeout (FCFS-AT). The FCFS-AT policy extends FCFS with the concept of an adaptive timeout. Upon assigning a task to a requesting worker, the scheduler defines a maximum time-to-execute for the given task/machine pair. This execution timeout is adapted to the requesting machine, being based on the machine's computing performance and on the CPU time still needed to complete the task. If the timeout expires before the task has been completed, the scheduler reintegrates the task into the pool of available tasks, so it can be assigned to another machine.
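The paper does not give the exact timeout formula, so the sketch below is an assumption: the expected wall-clock time is the remaining reference-machine CPU time scaled by the worker's INTFP index (REF_INTFP is the reference index from Section 4), inflated by a hypothetical slack factor to tolerate non-dedicated machines:

```python
REF_INTFP = 25.008  # INTFP index of the Pentium 4/1.6 GHz reference machine

def adaptive_timeout(remaining_ref_cpu, machine_intfp, slack=1.5):
    """Maximum time-to-execute for a task/machine pair, in seconds.

    remaining_ref_cpu: CPU seconds the task still needs on the reference machine.
    machine_intfp: INTFP index of the requesting machine (higher = faster).
    slack: tolerance factor for non-dedicated use (assumed value).
    """
    expected = remaining_ref_cpu * REF_INTFP / machine_intfp
    return expected * slack
```

If the timeout expires before completion, the scheduler simply returns the task to the pool, where the next requesting machine resumes it from the last shared checkpoint.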

First Come First Served with Task Replication (FCFS-TR). FCFS-TR is based upon FCFS-AT (it also incorporates the adaptive timeout), further extended with the capability of replicating tasks. Specifically, this policy starts to replicate tasks whenever all uncompleted tasks are already assigned and there is at least one worker machine without an assigned task. The rationale behind this policy is that if a task is replicated to a faster machine than the one currently executing it, it will probably finish sooner (probably, since the machines are volunteers and might become unavailable at any time) and thus might permit a faster termination of the application.
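This replication decision can be sketched as follows (our own reconstruction; the paper does not specify how the task to replicate is chosen, so the slowest-executor preference and the function shape are assumptions), with the replication factor capped at two as in Section 4.1:

```python
def pick_replication_target(unassigned_tasks, idle_workers, running,
                            max_replicas=2):
    """FCFS-TR sketch: replicate only when every uncompleted task is
    already assigned and at least one worker is idle.

    running: dict mapping task id -> list of INTFP indices of the
    machines currently executing it (one entry per replica).
    Returns the task to replicate, or None if replication is not allowed.
    """
    if unassigned_tasks or not idle_workers:
        return None  # plain FCFS-AT behaviour still applies
    # Only tasks below the replica cap are candidates; prefer the task
    # whose fastest current executor is slowest (assumed tie-break).
    candidates = [(min(perfs), task) for task, perfs in running.items()
                  if len(perfs) < max_replicas]
    if not candidates:
        return None
    return min(candidates)[1]
```

Under this sketch, a task stuck on a slow type A machine would be replicated before one already running on a type D machine, which matches the intuition that replication pays off mainly when the original executor is slow or interrupted.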

3. Experiments

Due to the impracticality of conducting repeatable experiments on real desktop grid platforms, we resorted to simulation to carry out the performance analysis of the proposed algorithms. For that purpose we developed the trace-driven DGSchedSim simulator [13]. DGSchedSim permits simulating the behaviour of user-defined scheduling policies over user-defined desktop grid environments. Additionally, the simulator allows the definition of the characteristics of the machines (performance, available resources).

3.1 Simulated environments

To assess the scheduling algorithms, several environments were simulated. Regarding machines, four different types were considered, as described in Table 1. With these machines, two different sets, each with 32 machines, were formed. The sets are labelled "Fast" and "Balanced". As the name implies, Fast is comprised of 32 machines of the fastest type (type D), while Balanced holds 8 machines of every type described in Table 1. The column INTFP refers to a performance metric derived from the Bytemark benchmark [14]. The metric combines the integer and floating-point performance of a machine in a single numeric index, with higher values expressing higher performance. The INTFP metric is used by the DGSchedSim simulator to

rank the machines and accordingly compute the time needed for processing a given task.

Type   CPU                  INTFP    Ratio to ref.
A      PIII@650 MHz         12.952   0.518
B      [email protected] GHz       21.533   0.861
C      [email protected] GHz       37.791   1.511
D      [email protected] GHz       42.320   1.692
Avg.   --                   28.649   1.146

Table 1: Characteristics of simulated machines

3.2 Traces

All simulations were conducted over a trace collected with the Distributed Data Collector utility [15] from two classrooms of an academic institution. The trace was collected with a two-minute period, meaning that every two minutes a sample was recorded from every machine. Each classroom had 16 Windows 2000 machines, and remained open from 8 am to the next day's 4 am on weekdays and from 9 am until 9 pm on Saturdays, being closed on Sundays. The trace spans 35 consecutive days during term time. Apart from classes, the machines are used by students for performing their practical assignments (word processing, programming and the like), for services like e-mail and for browsing the internet.

To assess the computing power of the scenarios considered in this study we used the average cluster equivalence ratio (CER). The CER, as defined by Kondo et al. [16], roughly measures the number of cluster machines (i.e. fully dedicated machines) needed to achieve a computing power equivalent to the one delivered by the set of observed machines. The CER is 0.53 for the "Fast" set (0.57 for weekdays, 0.44 for weekends) and 0.52 for the "Balanced" set (0.56 for weekdays, 0.43 for weekends). The differences between weekdays and weekends derive from the fact that on weekends the average number of powered-on machines is lower than on weekdays.
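A rough reading of the CER can be written down directly. This is our simplification of the metric, not Kondo et al.'s exact definition [16]: divide the useful CPU time the set actually delivered by the CPU time the same number of fully dedicated machines would supply over the same wall-clock interval:

```python
def cluster_equivalence_ratio(delivered_cpu_time, n_machines, wall_time):
    """Fraction of a fully dedicated cluster machine that each desktop
    machine is worth, on average, over the observation window.

    delivered_cpu_time: useful CPU seconds actually obtained from the set.
    n_machines: number of observed desktop machines.
    wall_time: length of the observation window in seconds.
    """
    return delivered_cpu_time / (n_machines * wall_time)
```

Under this reading, the "Fast" set's CER of 0.53 means the 32 classroom machines delivered roughly what 17 dedicated machines would have.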

4. Results

We simulated embarrassingly parallel applications comprised of 50 and 75 tasks, with individual tasks requiring 3600 and 7200 seconds of the reference machine's CPU time. The reference machine was a Pentium 4/1.6 GHz with an INTFP index of 25.008. So, considering, for example, a type C machine, which has a 37.791 INTFP index, a task defined as requiring 3600 seconds of CPU time of the reference machine would need roughly 2382 seconds of CPU time when carried out on a type C machine. To sort out the effects of checkpointing on the execution times, checkpoint frequencies ranging from 5% to 50% were simulated, besides execution without
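This scaling is a simple proportionality; the sketch below just encodes it (the function name is ours):

```python
REF_INTFP = 25.008  # Pentium 4/1.6 GHz reference machine

def scaled_cpu_time(ref_cpu_seconds, machine_intfp):
    """CPU time a task needs on a given machine, derived from its
    reference-machine requirement; a higher INTFP index means a
    proportionally faster machine."""
    return ref_cpu_seconds * REF_INTFP / machine_intfp

# A 3600 s (reference) task on a type C machine (INTFP 37.791) takes
# about 2382 s; on a type A machine (INTFP 12.952), about 6951 s.
```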

checkpoint. A 5% frequency corresponds to 9 checkpoints over the whole execution and 50% to a single checkpoint (at mid-execution). For all simulations, the checkpoint size was set to 1 MB.

Since the classrooms have reduced activity on weekends (besides being closed on Sundays, the number of users is reduced on Saturdays) and thus lower volatility, we split the simulations into two groups: weekdays and weekends. To prevent results biased by using only specific portions of the trace, all simulations were run multiple times from different starting points, with the reported turnaround times corresponding to the mean of the multiple executions. Specifically, for every scheduling policy assessed, the weekday simulations were run 17 times and the weekend simulations 12 times.

To permit a meaningful assessment of the scheduling policies, the results are reported as the ratio of the turnaround time of the application relative to the Ideal Execution Time (IET), with smaller values indicating better performance. The IET corresponds to the theoretical time that would be needed for the execution of the application if carried out under perfect conditions, that is, with fully dedicated and failure-free machines. Table 2 gives the IETs (in minutes) for the scenarios of this study.

Number of tasks   Task (seconds)   IET Fast   IET Balanced
50                3600             70.91      115.93
50                7200             141.83     231.85
75                3600             106.37     141.83
75                7200             212.74     283.66

Table 2: IET (in minutes) for each tasks/machine-set pair
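For the homogeneous "Fast" set, the IETs in Table 2 can be reproduced with a simple round-based model. This is our reconstruction; the paper states only the resulting values, and the heterogeneous Balanced set needs a finer schedule than this sketch provides:

```python
import math

REF_INTFP = 25.008  # reference machine index (Section 4)

def iet_minutes(n_tasks, ref_cpu_seconds, n_machines, machine_intfp):
    """Ideal Execution Time in minutes for a homogeneous machine set:
    tasks run in ceil(n_tasks / n_machines) back-to-back rounds on
    dedicated, failure-free machines."""
    per_task = ref_cpu_seconds * REF_INTFP / machine_intfp  # s on one machine
    rounds = math.ceil(n_tasks / n_machines)
    return rounds * per_task / 60.0

# 50 tasks of 3600 s on 32 type D machines (INTFP 42.320): two rounds,
# about 70.9 minutes, matching the "Fast" column of Table 2.
```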

4.1 Weekdays

Figure 1 depicts the weekday behaviour of the shared (prefixed with "S") and private (prefixed with "P") versions of the three scheduling policies for the execution of 50 tasks, each one requiring 3600 seconds of CPU time on the reference machine. In all cases, the checkpoint sharing policies consistently outperform the scheduling methodologies based on private checkpoints. This is mostly due to the rapid detection of interrupted executions by the shared-based methodologies, permitting a fast rescheduling of interrupted tasks on other machines. On the contrary, private schemes only detect an interruption when the associated timeout expires, thus losing precious time between the failure occurrence and the restart of the task on another machine. Indeed, the importance of timeouts is demonstrated by the fact that the adaptive timeout policy greatly improves the performance of the private versions without bringing any benefit to the shared checkpoint methodologies. Another observation that reinforces early failure detection as an important factor in the performance improvement is that execution times are not significantly influenced by the checkpoint frequency, since checkpointless executions perform close to executions with one and with nine checkpoints.

Figure 1: Execution time ratio for the "Fast" set (top) and for the "Balanced" set (bottom) for 50 tasks of 3600 seconds on weekdays

The replication of tasks has only a minor positive effect on the performance of the shared schemes. This was expected for the Fast set, since all machines of the group have the same performance and thus replication only improves performance if the original machine is interrupted. Although more noticeable, the performance gains are still marginal for the Balanced set. Overall, replicating tasks does not yield much performance improvement. Note that in this study we limited the replication factor to two, meaning that at any time only two instances of a task could exist.

4.2 Weekends

Figure 2 shows the execution time ratio for the 50 tasks of 3600 seconds each, executed over weekend periods. All scheduling policies present similar performance, with a barely perceptible advantage for the FCFS policies. This uniform behaviour is related to the stability of the machines, since practically no volatility exists during weekends. Based on these results, we can conclude that for stable environments private FCFS is an appropriate policy, since it performs as well as the other policies while requiring much less effort to install and set up.

Figure 2: Execution time ratio for the "Fast" set (top) and for the "Balanced" set (bottom) for 50 tasks of 3600 seconds on weekends

5. Related work

Kondo et al. [16] thoroughly studied scheduling methodologies oriented toward the optimization of turnaround times. Similarly to our approach, the study was based on simulations driven by traces collected from the Entropia desktop grid environment [17]. The analysed strategies were resource prioritisation (giving work preferably to the fastest machines), resource exclusion (excluding the least performant machines from participating in the computation) and task duplication. However, the work considered neither task migration nor checkpointing. Moreover, the study targeted small-sized tasks, with task lengths of 5, 15 and 35 minutes of CPU time.

The idle-resource exploiting system Condor has built-in support for system-level checkpoints under the Unix environment [11]. Coupled with a central checkpoint server, this permits the Condor system to migrate tasks from non-idle computers to idle machines. The main drawbacks of this approach are tied to the limitations of system-level checkpoints: large checkpoints and limited portability. In fact, Condor's central checkpoint services can only be used for migrating tasks between identical environments (i.e. compatible hardware and operating system), and currently only on Unix systems. Moreover, the size of checkpoints is frequently on the order of hundreds of megabytes, possibly stressing the network and the centralized checkpoint server when multiple workers are concurrently requesting checkpoint services.

Zhou and Lo [18] address the turnaround optimisation problem targeting peer-to-peer systems used for running embarrassingly parallel applications. They propose a so-called wave scheduler that organises peers according to their time zones. Under the wave scheduler, a client initially schedules its job on a host in the current nighttime zone. When the host machine is no longer idle, the job is migrated to a new nighttime zone. Thus, jobs ride a wave of idle cycles around the world to reduce turnaround time. Although an interesting idea, the exploitation of moving nighttime zones is only viable over wide geographic areas.


6. Conclusions and future work


In this paper we integrated scheduling and fault-tolerance methodologies to promote faster turnaround times in desktop grid systems. Specifically, we explored checkpoint location (private and shared), task migration, adaptive timeouts and replication. An important conclusion is that, in the studied environment, strategies based on sharing checkpoints prove effective, clearly outperforming scheduling policies which rely solely on private checkpoints. However, as seen by the marginal influence of the number of checkpoints on performance, the benefit of shared checkpoint methodologies comes mostly from the early detection of, and consequent reaction to, interrupted task executions, and not from resuming execution from checkpoints.

Another interesting outcome of this work is that FCFS methodologies based on private checkpointing improve significantly when fitted with a simple adaptive algorithm that sets an execution timeout according to the worker machine's performance and the CPU time still needed by the task. Finally, the methodologies presented herein do not bring any major benefit in stable environments, like classrooms over the weekends. Indeed, under these conditions FCFS-AT scheduling methodologies based on private checkpoints perform on par with policies based on shared checkpoints, without requiring the additional setup of a checkpoint server.

As future work, we plan to evaluate shared checkpointing in a real desktop grid environment. Additionally, we intend to expand our study to accommodate decentralized checkpoint storage, assessing the benefits that such an approach might yield. Finally, we intend to assess other heuristics for scheduling methodologies, namely policies based on time-series prediction.

References

[1] SETI, "SETI@Home Project (http://setiathome.berkeley.edu/)," 2005.
[2] J. Bohannon, "Grassroots supercomputing," Science, vol. 308, pp. 810-813, 2005.
[3] D. G. Heap, "Taurus - A Taxonomy of Actual Utilization of Real UNIX and Windows Servers," IBM White Paper GM12-0191, 2003.
[4] P. Domingues, P. Marques, and L. Silva, "Resource Usage of Windows Computer Laboratories," presented at ICPP Workshops 2005, Oslo, Norway, 2005.
[5] D. Anderson, "BOINC: A System for Public-Resource Computing and Storage," presented at the 5th IEEE/ACM International Workshop on Grid Computing, Pittsburgh, USA, 2004.
[6] G. Fedak, C. Germain, V. Neri, and F. Cappello, "XtremWeb: A Generic Global Computing System," presented at the 1st Int'l Symposium on Cluster Computing and the Grid (CCGRID'01), Brisbane, 2001.
[7] A. Luther, R. Buyya, R. Ranjan, and S. Venugopal, "Alchemi: A .NET-Based Enterprise Grid Computing System," presented at the 6th International Conference on Internet Computing (ICOMP'05), Las Vegas, USA, 2005.
[8] A. Chien, B. Calder, S. Elbert, and K. Bhatia, "Entropia: architecture and performance of an enterprise desktop grid system," Journal of Parallel and Distributed Computing, vol. 63, pp. 597-610, 2003.
[9] UnitedDevices, "United Devices, Inc. (http://www.ud.com)."
[10] S. Choi, M. Baik, C. Hwang, J. Gil, and H. Yu, "Volunteer availability based fault tolerant scheduling mechanism in DG computing environment," presented at the 3rd IEEE International Symposium on Network Computing and Applications (NCA'04), 2004.
[11] M. Litzkow, T. Tannenbaum, J. Basney, and M. Livny, "Checkpoint and Migration of UNIX Processes in the Condor Distributed Processing System," University of Wisconsin, Madison, Computer Sciences, Technical Report 1346, 1997.
[12] M. Massie, B. Chun, and D. Culler, "The Ganglia Distributed Monitoring System: Design, Implementation, and Experience," Parallel Computing, 2004.
[13] P. Domingues, P. Marques, and L. Silva, "DGSchedSim: a trace-driven simulator to evaluate scheduling algorithms for desktop grid environments," presented at the 14th Parallel, Distributed and Network-Based Processing (PDP'06), Montbéliard, France, 2006.
[14] BYTE, "BYTEmark project page (http://www.byte.com/bmark/)," Byte, 1996.
[15] P. Domingues, P. Marques, and L. Silva, "Distributed Data Collection through Remote Probing in Windows Environments," presented at the 13th Parallel, Distributed and Network-Based Processing (PDP'05), Lugano, Switzerland, 2005.
[16] D. Kondo, A. Chien, and H. Casanova, "Resource management for rapid application turnaround on enterprise desktop grids," presented at the 2004 ACM/IEEE Conference on Supercomputing, 2004.
[17] D. Kondo, M. Taufer, C. Brooks, H. Casanova, and A. Chien, "Characterizing and evaluating desktop grids: an empirical study," presented at the 18th International Parallel and Distributed Processing Symposium (IPDPS'04), 2004.
[18] D. Zhou and V. Lo, "Wave Scheduler: Scheduling for Faster Turnaround Time in Peer-based Desktop Grid Systems," presented at the 11th Workshop on Job Scheduling Strategies for Parallel Processing (ICS 2005), Cambridge, MA, 2005.
