Web Server Farm in the Cloud: Performance Evaluation and Dynamic Architecture

Huan Liu, Sewook Wee
[email protected], [email protected]
Accenture Technology Labs, San Jose, CA 95113, USA

Abstract. Web applications' traffic demand fluctuates widely and unpredictably. The common practice of provisioning a fixed capacity would either result in unsatisfied customers (underprovisioning) or waste valuable capital investment (overprovisioning). By leveraging an infrastructure cloud's on-demand, pay-per-use capabilities, we can finally match capacity with demand in real time. This paper investigates how we can build a web server farm in the cloud. We first present a benchmark performance study of various cloud components, which not only shows their performance results, but also reveals their limitations. Because of these limitations, no single configuration of cloud components can excel in all traffic scenarios. We then propose a dynamic switching architecture which dynamically switches among several configurations depending on the workload and traffic pattern.

1 Introduction

When architecting a web server farm, how much capacity to provision is one of the hardest questions to answer because of the dynamic and uncertain nature of web traffic. Many new web applications start with very little traffic since they are hardly known. One day, they may become famous when they hit the media (e.g., the slash-dot effect), and visitors flock to the web site, greatly driving up the traffic. A few days later, as the media effect wears off, traffic goes back to normal. Such a dramatic change in traffic is often hard, if not impossible, to forecast correctly, both in terms of the timing and the peak capacity required. Thus, it is difficult to determine when and how much capacity to provision.

Even if an amount can be determined, provisioning a fixed capacity is not a satisfactory solution. Unfortunately, this is still a common practice today due to the difficulties of forecasting and the long lead time to procure hardware. If the capacity provisioned is less than the peak demand, some requests cannot be served during the peak, resulting in unsatisfied customers. On the other hand, if the capacity provisioned is more than the peak demand, a large capacity is wasted idling during non-peak time, especially if the peak never materializes.

Fig. 1 illustrates the degree of fluctuation a web application can experience in reality. Animoto, a startup company, saw its infrastructure needs grow from 40 servers to 5,000 servers in a matter of a few days when it was widely publicized. A few days after the peak, its infrastructure needs shrank by a similar degree, eventually settling down at around 50 servers.

Fig. 1. Animoto’s capacity change in response to fluctuation in traffic.

Such a dramatic change in infrastructure requirements would mean either gross underprovisioning or gross overprovisioning if a fixed capacity were provisioned.

An infrastructure cloud, such as Amazon's EC2/S3 services [1], is a promising technology that can address this inherent difficulty in matching capacity with demand. First, it provides a practically unlimited infrastructure capacity (e.g., computing servers, storage) on demand. Instead of grossly overprovisioning upfront due to uncertain demand, users can elastically provision infrastructure resources from the provider's pool only when needed. Second, the pay-per-use model allows users to pay for their actual consumption instead of for the peak capacity. Third, a cloud infrastructure is much larger than most enterprise data centers. The economy of scale, both in hardware procurement and in infrastructure management and maintenance, helps to drive down the infrastructure cost further.

A cloud-based web server farm can dynamically adjust its size based on user demand. It starts with as little as one web server. During a traffic peak, the server farm automatically and quickly spawns more web servers to serve the demand. In comparison, in a traditional enterprise infrastructure, such a scale-up both takes a long time (months) and requires manual intervention. Similarly, as traffic goes away, the server farm can automatically shrink its capacity. Again, scaling down (and ceasing to pay for the capacity) is very hard to achieve in a traditional enterprise infrastructure.

This paper describes how to build such a cloud-based web server farm. More specifically, we present the following contributions.

1. Performance evaluation: Due to its business model, a cloud only provides commodity virtual servers. Compared to high-end, purpose-designed web servers, their base performance and performance bottleneck points are different. We evaluate the performance of cloud components through the SPECweb2005 benchmark [2]. Through this study, we identify several performance limitations of the cloud: no hardware load balancer is available, a

software load balancer has limited capacity, a web-services-based load balancer has limited scalability, and traditional techniques for designing high-performance web server farms do not apply in the cloud for security reasons. To the best of our knowledge, this is the first performance study of the cloud from an application's perspective.

2. Dynamic switching architecture: Through the performance study, we identify several cloud configurations that can be used to host web applications, each with its own strengths and weaknesses. Based on this evaluation, we propose a dynamic switching architecture which dynamically switches among the configurations based on the workload and traffic patterns. We discuss the criteria for switching, and how to switch in real time.

2 Understanding Cloud Components

Unlike a traditional infrastructure, where an application owner can choose any infrastructure component, a cloud offers only a limited number of components to choose from. Understanding the capabilities and limitations of cloud components is a prerequisite for migrating an existing application architecture or designing a new one. In this section, using SPECweb [2], a web server benchmark, we study the performance of several cloud components: 1) Amazon EC2 instances (virtual machines), 2) Google App Engine, 3) the Amazon Elastic Load Balancing web service, and 4) Amazon S3. We then assess their performance as either a web server or a load balancer.

2.1 Performance Assessment Setup

To assess the performance of cloud components, we use the SPECweb2005 benchmark [2], which is designed to simulate a real web application. It consists of three workloads: banking, support, and ecommerce. As its name suggests, the banking workload simulates the web server front-end of an online banking system. It is the most CPU intensive of the three workloads because it handles all communications through SSL for security reasons and because most of the requests and responses are short. The support workload, on the other hand, simulates a product support website where users download large files such as documentation files and device drivers. It stresses the network bandwidth the most; all communications are through regular HTTP, and the largest file to download is 40 MB. The ecommerce workload simulates an e-commerce website where users browse the site's inventory (HTTP) and buy items (SSL); in terms of workload characteristics, it is thus a combination of the other two. For simplicity of comparison, we hereafter focus only on banking and support because they stress the CPU and the network bandwidth, respectively.

Note that memory capacity is often the performance bottleneck in web servers. However, through our extensive benchmarking, we observe that EC2 instances have enough memory relative to their other resources: a standard instance (m1.small, m1.large, or m1.xlarge) is usually bounded by the CPU because it has relatively more

memory than CPU (1.7 GB of memory per 1 GHz of computing power), while a high-CPU instance (c1.medium or c1.xlarge) is usually bounded by the network bandwidth (800 Mbps).

The SPECweb benchmark consists of two components: the web application and the traffic generator. The web application implements the backend application logic. All pages are dynamically generated, and a user can choose either a PHP implementation or a JSP implementation. The traffic generator generates simulated user sessions to interact with the backend web application, where each session simulates an individual browser. The traffic generator can run on several servers in order to spread out the traffic-generating workload.

The performance metric for the benchmark is the number of simultaneous sessions that the web server can handle while meeting its QoS requirement. For each test, the load generator generates a user-specified number of simultaneous sessions and collects response time statistics for each session. A test passes if 95% of the pages return within TIME_TOLERABLE and 99% of the pages return within TIME_GOOD, where TIME_TOLERABLE and TIME_GOOD are specified by the benchmark and represent the QoS requirement. To find the maximum number of sessions, we try a number of values for the session count until we find the largest one that passes the QoS requirement. The traffic generator is hosted in Amazon EC2, since our Labs' WAN connection is not fast enough to support high traffic simulation.

We focus only on the larger cloud platforms – Google and Amazon – because they are currently widely used. Within Amazon, we profile only a few types of EC2 instances¹ to illustrate the capabilities and limitations of these cloud components. The instances we use include m1.small, the smallest and cheapest unit of scaling, which therefore provides the finest elastic provisioning granularity. We also evaluate c1.medium, which has 5 times the computing power of m1.small, and c1.xlarge, which has 20 times the computing power of m1.small. All Amazon instances have a half-duplex Gigabit network interface, and according to our independent tests they are able to transmit at around 800 Mbps (input and output combined).
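A minimal sketch of this search for the maximum passing load, assuming pass/fail is monotonic in the session count; run_specweb() is a hypothetical stand-in for launching the real SPECweb harness and parsing its report:

```python
def run_specweb(sessions):
    """Run one benchmark iteration with the given number of simultaneous
    sessions; return True if 95% of pages return within TIME_TOLERABLE
    and 99% within TIME_GOOD. Placeholder for the real harness."""
    raise NotImplementedError

def max_passing_sessions(low=100, high=20000):
    # Bisect for the largest passing session count, assuming a load that
    # fails never passes at an even higher load.
    while low < high:
        mid = (low + high + 1) // 2
        if run_specweb(mid):
            low = mid
        else:
            high = mid - 1
    return low
```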

2.2 Amazon EC2 Instance as a Web Server

Table 1 shows the performance results for four different combinations of workloads and EC2 instances. The m1.small instance is CPU-bound and is not able to saturate the network interface for either the support or the banking workload. Since support is not CPU intensive, we are able to saturate the network with a slight increase in CPU power by using c1.medium. Note that, with linear projection, the largest instance would saturate the network bandwidth for banking at 18,000 simultaneous sessions. However, due to a bug in the Rock web server (which its developers are currently fixing), the c1.xlarge instance became saturated at 7,000 simultaneous sessions because the requests from clients were not evenly distributed across all eight CPUs.

¹ Instance is Amazon's term for a virtual server.

                        CPU load (%)   Network bandwidth (Mbps)   # of sessions
Banking on m1.small     90             60                          1,350
Banking on c1.xlarge    20             310                         7,000
Support on m1.small     90             500                         1,190
Support on c1.medium    20             800                         1,800

Table 1. Single EC2 instance performance as a web server
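As a quick sanity check of the linear projection mentioned above, using the Table 1 numbers (a back-of-the-envelope sketch, not part of the benchmark itself):

```python
# Banking on m1.small sustains 1,350 sessions at 60 Mbps, so scaling
# linearly to the 800 Mbps interface limit gives the projected ceiling.
sessions, mbps, nic_mbps = 1350, 60, 800
print(sessions * nic_mbps / mbps)  # -> 18000.0 simultaneous sessions
```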

2.3 Amazon EC2 Instance as a Load Balancer

There are two reasons to use a load balancer for high-traffic sites. First, one may want to use m1.small as the unit of auto-scaling to minimize cost; as shown in the last section, m1.small is not able to fully utilize the network bandwidth for either workload. Second, a web application may be more CPU-hungry than the SPECweb benchmarks and thus may need to scale beyond a single instance's computation capacity.

Since Amazon does not offer a hardware load balancer as a building block, we have to use a software load balancer hosted on a virtual server. Beyond a virtual server's capacity limit, the cloud can further limit the scalability of a software load balancer because of security requirements. For example, for security reasons, Amazon EC2 disables many layer-2 capabilities, such as promiscuous mode and IP spoofing. Traditional techniques used to scale software load balancers, such as TCP handoff [3] and direct web server return [4], do not work because they assume the web servers can take the same IP address as the load balancer.

There are many software load balancer implementations. Some, such as the Linux Virtual Server [5], do not work in the Amazon environment because they require the ability to spoof their IP address. We profiled several that work well in the Amazon environment, including HaProxy [6], Nginx [7], and Rock [8]. Both HaProxy and Nginx forward traffic at layer 7, so they are less scalable because of SSL termination and SSL renegotiation. In comparison, Rock forwards traffic at layer 4 without the SSL processing overhead. For brevity, we report only the performance of the Rock load balancer running on an m1.small instance.

For the banking workload, an m1.small instance is able to process 400 Mbps of traffic. Although we are not yet able to run the Rock load balancer on a bigger instance due to a bug in the software, we believe that running it on a c1.medium instance could easily saturate the full network interface speed. For the support workload, the requests are mostly long file transfers; the load balancer needs to do less work since each packet is big and there are fewer packets to relay. As a result, an m1.small instance is able to handle the full 800 Mbps bandwidth. Because the load balancer does not process the traffic, but rather only forwards packets, we expect these results to hold for other web applications.

For each incoming (outgoing) packet, the load balancer must first receive the packet from the client (web server) and then send it to the web server (client). Therefore, the effective client throughput is only half of the network interface throughput, i.e., even if we saturate the load balancer's network interface, the client throughput is only 400 Mbps.

We must take this tradeoff into account when deciding between running a single web server and running a load balancer with several web servers. A load balancer can only handle half the traffic because of cloud limitations; however, we can scale the number of web servers in the backend, especially if the web application is CPU intensive. A single web server can handle a larger amount of traffic; however, care must be taken to ensure that the CPU does not become the bottleneck before the network interface.

2.4 Google App Engine as a Load Balancer

Because running a web presence is a common use case for a cloud, there are dedicated cloud offerings specifically targeted at hosting web applications. For example, Google App Engine [9] promises to transparently scale a web application without limit. Although possible in theory, we found that it is not so easy to scale a web application in practice. Again, we use SPECweb to profile App Engine's performance.

Google App Engine is currently limited to the Python and Java programming languages. Java support is still in beta, and many Java packages are not allowed to run for security reasons. Since the SPECweb benchmark has only PHP and JSP implementations, it is not straightforward to port it to App Engine. Instead, we implemented a load balancer in App Engine. Both the load generators and the web servers run in Amazon EC2, but all web requests are first sent to the load balancer front end, which then forwards them to the web servers in EC2.

Initially, when we tested App Engine, even a test with 30 simultaneous sessions failed because we exceeded the burst quota. Over the course of 5 months (since early Feb. 2009), we have been working with the App Engine team to increase our quota limit to enable testing. However, getting around the quota limit seems to require significant re-engineering. As of the date of submission, we are only able to pass a banking test at 100 simultaneous sessions.

Beyond the performance limits, App Engine has a number of other limitations. It currently supports only the Python and Java programming languages. In addition, incoming and outgoing requests are limited to 10 MB per request (it was 1 MB until Mar. 2009), so the SPECweb support workload will fail. On the other hand, App Engine does have a rich programming library and an integrated persistent data store. When considering App Engine as a possible cloud component for hosting a web site, we must weigh these tradeoffs. For example, we choose to implement our server farm monitoring capabilities in App Engine to benefit from its rich graphics library and the persistent store for our monitoring data. Since the monitoring engine only collects data from a handful of servers, instead of responding to thousands of user queries, App Engine can easily handle that workload.
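For illustration, a minimal forwarding handler in the spirit of such a front end, written against App Engine's Python webapp framework and urlfetch API of the time; the backend address is a placeholder, and this is a sketch, not our exact implementation:

```python
from google.appengine.api import urlfetch
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

BACKEND = 'http://ec2-203-0-113-10.compute-1.amazonaws.com'  # hypothetical EC2 web server

class Forward(webapp.RequestHandler):
    def get(self):
        # Relay the request to an EC2 web server and copy the response back.
        result = urlfetch.fetch(BACKEND + self.request.path_qs)
        self.response.set_status(result.status_code)
        self.response.out.write(result.content)

application = webapp.WSGIApplication([('/.*', Forward)])

if __name__ == '__main__':
    run_wsgi_app(application)
```

Every forwarded request and response passes through App Engine's quota accounting, which is why the burst quota became the bottleneck in our tests.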

2.5 Amazon Elastic Load Balancing

Recently, Amazon Web Services announced a dedicated load balancing service: Elastic Load Balancing (ELB) [10]. It provides a virtual load balancer with a DNS name and the capability to add or remove backend web servers and check their health. Because the ELB interface is not inherently tied to a single server, we believe it has the potential to scale and address the performance bottlenecks of software load balancers discussed in Section 2.3. However, our evaluation shows that it currently does not scale better than a single-instance Rock load balancer.

Again, we use SPECweb to evaluate the Amazon ELB service. To stress the bandwidth limit, we use the support workload. Recall that 1,800 sessions saturate the network bandwidth of a single web server running on a c1.medium instance and 900 sessions saturate the network bandwidth of a Rock load balancer. Amazon ELB fails to meet the QoS requirement above 850 sessions and refuses to serve more than 950 sessions, complaining that the server is too busy. Therefore, we conclude that Amazon ELB's current scalability is about the same as a single-instance Rock load balancer.

Interestingly, Amazon ELB is also an expensive alternative to the Rock load balancer. The Rock load balancer costs $0.10 per hour (the instance cost), whereas Amazon ELB costs $0.025 per hour plus $0.008 per GB of traffic it processes. Hence, at more than 22 Mbps of traffic, the Rock load balancer is cheaper. Moreover, a load balancer is not needed at all below 22 Mbps of traffic, unless the web application is very CPU intensive, which is not the case for any of the three workloads in the SPECweb benchmark.
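The break-even point follows directly from the quoted prices; a small sketch of the arithmetic:

```python
# Rock on an m1.small costs $0.10/hour; ELB costs $0.025/hour plus
# $0.008 per GB of traffic processed.
ROCK_PER_HOUR = 0.10
ELB_PER_HOUR = 0.025
ELB_PER_GB = 0.008

def elb_cost_per_hour(mbps):
    gb_per_hour = mbps / 8.0 * 3600 / 1000.0  # Mbps -> decimal GB per hour
    return ELB_PER_HOUR + ELB_PER_GB * gb_per_hour

# 0.025 + 0.008 * x = 0.10 gives x = 9.375 GB/hour, i.e. roughly
# 21 Mbps (about 22 Mbps if GB is read as binary gibibytes).
for mbps in (10, 21, 50):
    print(mbps, round(elb_cost_per_hour(mbps), 3))
```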

2.6 Amazon S3 as a Web Server for Static Content

Another cloud component that can be used for hosting a web presence is Amazon S3. Although designed for data storage through a SOAP or REST API, it can also host static web content. To enable domain hosting in S3, we have to perform two steps. First, we change the DNS record so that the domain name's (e.g., www.testing.com) CNAME record points to S3 (i.e., s3.amazonaws.com). Second, we create an S3 bucket with the same name as the domain (i.e., www.testing.com) and store all static web pages under the bucket. When a client requests a web page, the request is sent to S3. S3 first uses the "Host" header (a required header in HTTP 1.1) to determine the bucket name, then uses the path in the URI as the key to look up the file to return.

Since the SPECweb benchmark dynamically generates its web pages, we cannot evaluate S3 directly with the benchmark. Instead, we host a large number of static files on S3, and we launch a number of EC2 m1.small instances, each with 10 simultaneous TCP sessions sequentially requesting these files one by one as fast as it can. Fig. 2 shows the aggregate throughput as a function of the number of m1.small instances. As the graph shows, S3 throughput increases linearly; at 100 instances, we achieved 16 Gbps of throughput.
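As an illustration of the Host-header lookup, the following minimal sketch fetches a page the way a browser would after the CNAME change; www.testing.com is the example domain from the text, and the object key is hypothetical:

```python
# The request lands on s3.amazonaws.com; the Host header (carrying the
# bucket/domain name) selects the bucket, and the URI path is the
# object key.
import http.client

conn = http.client.HTTPConnection("s3.amazonaws.com")
conn.request("GET", "/index.html", headers={"Host": "www.testing.com"})
resp = conn.getresponse()
print(resp.status, len(resp.read()))
```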

Fig. 2. Aggregate S3 throughput as a function of the number of EC2 instances that simultaneously query S3.

Since we are accessing S3 from EC2, the latencies are all below the TIME_GOOD and TIME_TOLERABLE thresholds in SPECweb.

The Amazon CloudFront offering enables S3 to be used as a set of geographically distributed content distribution servers. One can simply enable CloudFront on a bucket by issuing a web service API call, and the content is automatically cached on geographically distributed servers. Since we do not have access to a geographically distributed set of servers, we are not able to evaluate CloudFront's performance from the end users' perspective. In theory, however, it should offer the same scalability as S3, with the additional benefit of reduced latency when accessed from remote locations.

Although highly scalable, S3 has two limitations. First, it can only host static content. Second, in order to use S3 as a web hosting platform, a client can only access the non-SSL endpoint. This is because S3 needs the "Host" header to determine the bucket, and SSL would hide this information.

3 Dynamic Switching Architecture

As we have seen, no single cloud configuration is able to satisfy the requirements of all web applications. For CPU intensive web applications, it is beneficial to use a load balancer so that the computation can be spread across many instances. For network intensive applications, however, it is better to run them on a standalone instance, possibly one with high CPU capacity, to maximize network throughput. For even more network intensive applications, it may be necessary to use DNS load balancing to get around a single instance's bandwidth limitation.

If an application's workload is mostly static, or if the application cannot tolerate the slightest disruption, the performance study in the last section can help to pick the best static configuration. One can pick a cloud configuration based on the peak demand, assuming the peak demand can be accurately forecast, but then the application cannot enjoy the full economic benefits of a cloud when the demand goes away.

Unfortunately, most applications' usage fluctuates over time. It is conjectured [11] that the diurnal pattern of network activity is one of the few invariant properties of the Internet, mostly due to human daily activities. Many web applications exhibit even greater fluctuation, for example during a flash crowd. Moreover, a web application's workload characteristic is itself not static, but rather a mix of many; the overall workload characteristic can shift as the usage pattern shifts. Consider a sample website that hosts entertainment content. During the daytime, a majority of the traffic may be CPU intensive content search; at night, however, network intensive multimedia content streaming may dominate.

In this paper, we propose a dynamic switching architecture which chooses the most appropriate cloud configuration based on the application workload. This is enabled by the dynamic capabilities offered by a cloud.

3.1 Cloud Configurations

The dynamic switching architecture may employ one of the following four cloud configurations.

1. Small instance: A single web server running on an m1.small instance. This is the cheapest way to run a web presence, and it is used when the web application is least loaded, such as at night.

2. Load balancer: A single software load balancer running on a c1.medium instance. It balances traffic to a number of web servers running on m1.small instances in the backend (their number is automatically adjusted based on traffic volume). Having m1.small as the smallest unit of auto-scaling means that we incur the least cost. As our performance results show, a single c1.medium instance is able to load balance traffic up to its network interface speed, even when the packet size is small.

3. Large instance: A single web server running on a c1.xlarge instance. A c1.xlarge instance has the highest CPU power (20 times that of m1.small) and can saturate the network interface even for applications that are more CPU intensive than the banking workload.

4. DNS: For applications that require more than 800 Mbps of bandwidth or more computation capacity than a c1.xlarge can provide, there is currently no single cloud component from Google or Amazon that can handle the workload. We have to resort to DNS-level load balancing. In this configuration, we run several web servers on c1.xlarge instances (their number is automatically adjusted based on traffic volume), and we program the DNS to point to a list of IP addresses, each corresponding to one instance.

3.2 Switching Mechanism and Criteria

To switch between the different configurations without affecting customers' experience, we leverage a cloud's dynamic capability. Specifically, we use Amazon's Elastic IP feature to reassign the same IP address to different configurations. When we determine that there is a need to switch, we first launch the new configuration, program the Elastic IP to point to the new configuration, and then shut down the old configuration. In our experiments, re-programming an Elastic IP takes less than 5 seconds to complete, so it has a minimal impact on the application. For example, we ran the SPECweb support workload with 100 sessions, and it finished with no error even though we re-programmed the Elastic IP from one configuration to another. New requests during the re-programming period are buffered, so they may experience a longer latency than usual, and existing requests may be terminated. Although some requests may be terminated, a simple retry will fix the problem if we have the session management mechanism described in Section 3.3 in place.

We actively monitor the CPU and bandwidth usage on each instance to determine whether we need to switch between the different configurations. The rules for switching are shown in Fig. 3. For clarity, we only show the transitions to scale up; the logic to scale down when the workload decreases is exactly the opposite. Each configuration has its own performance limit, as follows.

– Small instance: CPU capacity is one EC2 Compute Unit (ECU), where one ECU provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. Network capacity is roughly 800 Mbps.
– Load balancer: CPU capacity is unlimited because we can scale to an arbitrary number of web servers in the backend. Network capacity is roughly 400 Mbps of client traffic; the load balancer's network interface still experiences 800 Mbps of traffic because it relays traffic to the backend.
– Large instance: CPU capacity is 20 ECU. Network capacity is roughly 800 Mbps.
– DNS: Unlimited CPU and network capacity. In reality, the network speed is limited by the overall bandwidth into the Amazon cloud. Unfortunately, we are not able to test this limit because we do not have access to hosts outside Amazon that can generate a high enough load.

We make the switching decision when the workload approaches the capacity of the current configuration. The configuration we switch to depends on a projection of whether the current workload would overload the new configuration. Note that, depending on the application characteristics, we may not switch from one configuration to the immediate next configuration. For example, consider a web application that is very CPU intensive. It would run fine under the load balancer configuration, since we can scale the number of web servers behind the load balancer without limit. However, when the aggregate traffic exceeds 400 Mbps, we may not be able to switch to the large instance configuration, as a c1.xlarge instance may not be able to handle the CPU workload. Instead, we will switch to the DNS configuration directly when traffic increases. We make this decision automatically based on the measured traffic consumption and CPU load, projecting the workload onto the target configuration.
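A hedged sketch of the switch itself, using the boto library; region, address, and instance identifiers are placeholders, and error handling is omitted:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

def switch_configuration(elastic_ip, new_front_end, old_front_end):
    # The new configuration is assumed to be launched and warm.
    # Repointing the public address took under 5 seconds in our
    # experiments; requests arriving meanwhile are buffered or retried.
    conn.associate_address(instance_id=new_front_end, public_ip=elastic_ip)
    # Retire the previous configuration once traffic has drained.
    conn.terminate_instances(instance_ids=[old_front_end])
```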

Fig. 3. Switching decision logic when switching to a higher configuration.
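A hedged sketch of the scale-up decision in Fig. 3: inputs are the projected CPU demand (in ECUs) and client-side bandwidth (in Mbps), and the limits mirror the list above. Function and configuration names are illustrative only.

```python
def next_configuration(cpu_ecu, client_mbps):
    if cpu_ecu <= 1 and client_mbps <= 800:
        return "small instance"
    if client_mbps <= 400:
        # The load balancer relays every packet, so client throughput is
        # capped at half the 800 Mbps interface; CPU scales with backends.
        return "load balancer"
    if cpu_ecu <= 20 and client_mbps <= 800:
        return "large instance"
    return "DNS"

# A very CPU-intensive application above 400 Mbps skips the large
# instance and goes straight to DNS, as discussed above.
assert next_configuration(cpu_ecu=30, client_mbps=500) == "DNS"
```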

3.3 Session Management

To avoid state replication, the dynamic switching architecture requires session state to be stored on the client, typically in the form of a browser cookie. If the server state can be captured compactly, the server can set the entire state in a browser cookie. When the client browser makes its next request, the state is returned in the request's cookie, so that the server (which can be a different server after a configuration switch) can recover the last state. If the server state is large, it can be captured in a central database and the

server only needs to return a token. Since the central database remains the same across configuration changes, a new server can still look up the complete previous state. Since the central database only captures state information, it is easier to make it sufficiently scalable to handle the whole web application.
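A minimal sketch of the cookie-based variant, assuming an HMAC key shared by all web servers so that any server, including one brought up after a configuration switch, can verify and decode the cookie; names and key handling are illustrative only:

```python
import base64
import hashlib
import hmac
import json

SECRET = b'shared-farm-key'  # hypothetical; distributed to every web server

def encode_state(state):
    # Serialize the session state and sign it, so a different server
    # after a switch can trust what the client returns.
    payload = base64.urlsafe_b64encode(json.dumps(state).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + '.' + sig  # value for a Set-Cookie header

def decode_state(cookie):
    payload, sig = cookie.rsplit('.', 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError('tampered session cookie')
    return json.loads(base64.urlsafe_b64decode(payload))
```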

4 Conclusions

In this paper, we investigate how to build a web server farm in a cloud. We present a benchmark performance study of various existing cloud components, which not only shows their performance results, but also reveals their limitations. First, a single web server's performance is an order of magnitude lower than that of state-of-the-art web server hardware solutions. Second, the performance of a software load balancer is limited by both the single network interface and traffic relaying, which halves its effective throughput. Third, both Google App Engine and Amazon Elastic Load Balancing fall short of their promise of unlimited scalability. Finally, Amazon S3 scales the best, but it is only a viable option for static content. Due to these limitations, no single configuration can satisfy all traffic scenarios. We propose a dynamic switching architecture which switches between different configurations based on the detected workload and traffic characteristics. We discuss the switching criteria and how we use the cloud's dynamic capability to implement the architecture. The dynamic switching architecture achieves the highest scalability while incurring the least cost.

References

1. Amazon Web Services: Amazon Web Services (AWS). http://aws.amazon.com
2. SPEC: SPECweb2005 benchmark. http://spec.org/web2005/
3. Hunt, G., Nahum, E., Tracey, J.: Enabling content-based load distribution for scalable services. Technical report (1997)
4. Cherkasova, L.: FLEX: Load balancing and management strategy for scalable web hosting service. In: Proceedings of the Fifth International Symposium on Computers and Communications (ISCC'00) (2000) 8–13
5. LVS: Linux Virtual Server project. http://www.linuxvirtualserver.org/
6. HaProxy: HaProxy load balancer. http://haproxy.1wt.eu/
7. Nginx: Nginx web server and load balancer. http://nginx.net/
8. Accoria: Rock web server and load balancer. http://www.accoria.com
9. Google Inc.: Google App Engine. http://code.google.com/appengine/
10. Amazon Web Services: Elastic Load Balancing. http://aws.amazon.com/elasticloadbalancing/
11. Floyd, S., Paxson, V.: Difficulties in simulating the Internet. IEEE/ACM Transactions on Networking 9(4) (Aug. 2001) 392–403
