Comparing Real-Time Calculus with the Existing Analytical Approaches for the Performance Evaluation of Network Interfaces

Godofredo R. Garay (1), Julio Ortega (2), Vicente Alarcón-Aquino (3)

(1) Facultad de Informática, Universidad de Camagüey, Cuba
(2) Departamento de Arquitectura y Tecnología de Computadores, Universidad de Granada, España
(3) Departamento de Ingeniería Electrónica y Mecatrónica, Universidad de las Américas, Puebla, México

Abstract

In this paper we compare an analytical framework based on Real-Time Calculus with the analytical approaches commonly used for the performance evaluation of network interfaces, such as probabilistic queuing models, parallel computation models, and protocol offload models (LAWS and EMO). In particular, we focus on the capabilities that each of these alternatives offers for evaluating the NIC's buffer requirements in a network node.

1. Introduction

Because of the complexity of computer systems, quantitative evaluation has become the mainstay of computer architecture research. Computer architecture's heavy emphasis on simulation effectively discourages the research community from using and proposing analytical models, although these models can help to understand a system in ways that simulation does not, and they can also model the expected impact of future hardware modifications, thereby avoiding unnecessary costs associated with more detailed simulation [1, 2]. References [3, 4] consider that performance evaluation methodologies based on simulation and direct measurement should be used with care, as only a finite set of initial states, environment behaviors, and execution traces can be considered, and the corner cases that lead to the worst-case or best-case execution time might be missed. In this paper we compare an analytical framework based on Real-Time Calculus (RTC) with the analytical approaches commonly used for the performance evaluation of network interfaces in systems connected to LAN networks (see [5]), such as probabilistic queuing models, parallel computation models, and protocol offload models (LAWS and EMO). In particular, we focus on the capabilities that each of these alternatives offers for evaluating the NIC's buffer requirements (i.e., maximum backlog) in a network node. RTC has also been used in the context of Controller Area Network (CAN) systems [6] and automotive communication systems [7].

The paper is organized as follows. Section 2 presents the motivation of this work and some background information. Existing analytical approaches are described in Section 3, and the main features of Real-Time Calculus are presented in Section 4. A discussion of the principal findings is given in Section 5. The paper is concluded in Section 6.

2. Motivation and Background

2.1. Definition of Network Interface

A network interface allows a computer system to send and receive packets over a network. It consists of two components: (1) the network adapter (also known as the host interface, Network Interface Card, or NIC), the hardware that connects the network medium to the host I/O bus, moves data, generates communication events, and provides protection; and (2) the network software (often referred to simply as the software) on the host, which handles application communication requests, implements communication protocols, and manages the adapter. In terms of the OSI model, a conventional NIC (the hardware component) operates at the data link layer, whereas the software operates at the data link, network, and transport layers. Part of the data-link layer functionality is performed in software by the data-link driver (NIC driver). Modern NICs also include layer 3/4 functions such as checksum calculation and TCP offloading.

2.2. Modeling the NIC Buffer Requirements

With the advent of end-to-end dedicated circuits, congestion has moved from the network to the edge of the network, namely the end-system. In such a case, packets may get dropped due to buffer overflow at the NIC if interrupts are not serviced by the OS within the appropriate time to transfer the packets from the NIC buffer to main memory [8]. This situation motivates this work. The transfer time needed to move data (packet payload) and control information (descriptors) from the NIC to the IP queue (receive socket buffers) depends on system component characteristics (NIC processor and I/O bus speeds, DMA burst size, memory latency, etc.) and on their operation. From the point of view of this work, we are interested in modeling (analytically) these hardware and software components.
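As a back-of-the-envelope illustration of how such component characteristics combine into a per-packet transfer time, consider the following sketch (ours, not taken from the paper; every parameter value below is made up, and a real model would also have to account for bus contention):

```python
# A rough sketch (hypothetical parameters) of how NIC processor / I/O bus speed,
# DMA burst size, and memory latency combine into a per-packet transfer time.
def packet_transfer_time_us(payload_bytes, descriptor_bytes, bus_bytes_per_us,
                            dma_burst_bytes, arbitration_latency_us,
                            memory_write_latency_us):
    """Rough time to move one packet (payload + descriptor) from the NIC buffer to main memory."""
    total_bytes = payload_bytes + descriptor_bytes
    bursts = -(-total_bytes // dma_burst_bytes)        # ceiling division: number of DMA bursts
    bus_time = bursts * arbitration_latency_us + total_bytes / bus_bytes_per_us
    return bus_time + memory_write_latency_us

# Example: a 1500-byte frame over a hypothetical 133 MB/s I/O bus with 128-byte bursts.
print(packet_transfer_time_us(1500, 16, 133.0, 128, 0.05, 0.2))
```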


Thus, in this paper we focus on the features provided by each analytical approach for modeling the impact on the NIC's buffer requirements (backlog) of different aspects such as (1) the input workload, (2) the NIC, (3) the I/O subsystem (e.g., PCI bus and arbiter), and (4) the memory subsystem (memory controller, memory bus, and memory modules). See Figure 1.

[Figure 1. Focus of attention — a network node comprising the NIC, the I/O bus and its arbiter, the chipset (north and south bridge, memory controller), the memory bus, and the memory modules; the input workload arrives at the NIC, and the NIC's maximum backlog is the quantity of interest.]

3. Existing Approaches

In a recent survey of analytical approaches for the performance evaluation of network interfaces over the period 2003-2009 conducted by us (see [5]), we found that only a few authors have studied the performance of network interfaces analytically, and most of the reviewed papers use queuing theory for performance evaluation. The approaches commonly used can be broadly classified into queuing models, parallel-computation-oriented models, and protocol-offload-oriented models. In what follows, we describe the most important features provided by these analytical approaches.

3.1. Queuing Models

The most popular analytical approach for the performance evaluation of network interfaces that we found in the survey [5] is Queuing Theory. Queuing Theory (QT) and Markov Processes (MP) have been covered in several books; see [9-15]. Here we present a short introduction to QT taken from [16], which summarizes the most important issues of the previously cited references and covers the most important queuing systems with a single service center. For queuing networks, only some basics are mentioned. As described in [16], QT is mainly seen as a branch of applied probability theory and is applied in different fields, e.g., communication networks and computer systems. The subject of QT can be described as follows: consider a service center and a population of customers, which at some times enter the service center in order to obtain service. It is often the case that the service center can only serve a limited number of customers. If a new customer arrives and the service capacity is exhausted, it enters a waiting line and waits until the service facility becomes available. So we can identify three main elements of a service center: a population of customers, the service facility, and the waiting line. Also within the scope of queuing theory is the case where several service centers are arranged in a network and a single customer can walk through this network along a specific path, visiting several service centers. Some examples of the use of queuing theory in networking are the dimensioning of buffers in routers, the calculation of end-to-end throughput in networks, and so forth. QT tries to answer questions such as the mean waiting time in the queue, the mean system response time (waiting time in the queue plus service time), the mean utilization of the service facility, the distribution of the number of customers in the queue, the distribution of the number of customers in the system, and so forth. These questions are mainly investigated in a stochastic scenario, where, e.g., the inter-arrival times of the customers or the service times are assumed to be random. Queuing systems may differ not only in their distributions of the inter-arrival and service times, but also in the number of servers, the size of the waiting line (infinite or finite), the service discipline, and so forth. Some common service disciplines are FIFO (First In, First Out), LIFO (Last In, First Out), random service, round robin, and priority disciplines. The simplest queuing system is the M/M/1 system (with FIFO service). In QT, we are mainly interested in steady-state solutions, i.e., where the system, after a long running time, tends to reach a stable state. Some performance metrics analyzed for the M/M/1 queue are the utilization, the mean number of customers in the system, the mean response time, the size of the waiting line required so that customers are lost only with a small probability, etc. Reference [16] also considers systems that are represented as networks of queues. One basic classification of queuing networks is the distinction between open and closed queuing networks. In an open network, new customers may arrive from outside the system (coming from a conceptually infinite population) and later leave the system. In a closed queuing network, the number of customers is fixed and no customer enters or leaves the system. Again, several references to studies that employ this analytical approach in the performance evaluation of network interfaces are given in [5].
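To make the steady-state quantities listed above concrete, the following minimal sketch (ours, not taken from [16]; the arrival and service rates are purely hypothetical) computes the basic M/M/1 metrics and the probability that a FIFO buffer holds more than k packets:

```python
# Steady-state M/M/1 formulas for a hypothetical NIC-to-memory service center.
def mm1_metrics(arrival_rate, service_rate):
    """Return basic steady-state M/M/1 metrics (FIFO, infinite queue)."""
    assert arrival_rate < service_rate, "queue is unstable if lambda >= mu"
    rho = arrival_rate / service_rate                 # utilization
    n_mean = rho / (1.0 - rho)                        # mean number of customers in the system
    t_mean = 1.0 / (service_rate - arrival_rate)      # mean response time (wait + service)
    w_mean = t_mean - 1.0 / service_rate              # mean waiting time in the queue
    return {"utilization": rho, "mean_in_system": n_mean,
            "mean_response_time": t_mean, "mean_wait": w_mean}

def prob_more_than_k_in_system(arrival_rate, service_rate, k):
    """P[N > k] = rho**(k+1): probability the buffer holds more than k packets."""
    rho = arrival_rate / service_rate
    return rho ** (k + 1)

# Example: 80,000 packets/s arriving at a stage that can serve 100,000 packets/s.
print(mm1_metrics(80e3, 100e3))
print(prob_more_than_k_in_system(80e3, 100e3, k=32))
```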

3.2. Parallel Computation Models

Reference [5] shows that parallel computation models such as LogP [17] have been used by a few authors in the performance evaluation of network interfaces. The main parameters of LogP are: L (an upper bound on the latency, or delay, incurred in communicating a message containing a word or small number of words from its source module to its target module), o (the overhead, defined as the length of time that a processor is engaged in the transmission or reception of each message; during this time, the processor cannot perform other operations), g (the gap, defined as the minimum time interval between consecutive message transmissions or consecutive message receptions at a processor), and P (the number of processor/memory modules). This model inspired the protocol offload models that we describe in the next section.
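As an illustration of how these parameters combine, the sketch below (ours; the parameter values are hypothetical) estimates message delivery times using the usual LogP accounting of send overhead, wire latency, receive overhead, and inter-message gap:

```python
# A small illustrative sketch of the LogP cost model parameters L, o, g described above.
def one_message_latency(L, o):
    """End-to-end time for one small message: send overhead + wire latency + receive overhead."""
    return o + L + o

def n_message_stream_time(n, L, o, g):
    """Time until the n-th message is received when one processor streams n messages
    to another; consecutive injections are spaced by max(g, o)."""
    assert n >= 1
    return (n - 1) * max(g, o) + one_message_latency(L, o)

# Example (hypothetical values, in microseconds):
L, o, g = 5.0, 1.5, 4.0
print(one_message_latency(L, o))            # 8.0 us for a single message
print(n_message_stream_time(100, L, o, g))  # time to deliver a 100-message stream
```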

3.3. Protocol Offload Models

3.3.1. LAWS. LAWS [18] models fundamental performance properties of transport offload and other techniques for low-overhead I/O in terms of four key ratios that capture the CPU-intensity of the application and the relative speeds of the host, NIC device, and network path. The model characterizes the potential benefits of transport offload for application throughput as a function of these four ratios. The ratios—Lag (α), Application (γ), Wire (σ), and Structural (β), or LAWS—capture speed differences between the host and the network, the CPU-intensity of the application, and structural factors that may eliminate work in the offload case, and they determine the potential benefit of protocol offload; higher ratio values diminish the benefit. The lag ratio is defined as the ratio of host processing speed to NIC processing speed; the application ratio as the ratio of normalized application processing to communication processing (i.e., the CPU-intensity of the application); the wire ratio as the ratio of host saturation bandwidth to raw network bandwidth (i.e., the portion of the network bandwidth the host can deliver without offload); and the structural ratio as the ratio of the normalized processing overhead for communication with offload to the overhead without offload (i.e., the portion of normalized overhead that remains in the system, either on the NIC or on the host, after offload). The input parameters of LAWS are: o (CPU occupancy for communication overhead per unit of bandwidth, normalized to a reference host), a (CPU occupancy for application processing per unit of bandwidth, normalized to a reference host), X (occupancy scale factor for host processing), Y (occupancy scale factor for NIC processing), p (portion of the communication overhead o offloaded to the NIC), and B (bandwidth of the network path). The contribution of [18] is to capture many of the factors governing the effectiveness of TCP/IP offload in terms of simple relationships among the four LAWS ratios. The LAWS analysis is based on constructing different graphs that explain the effect of the application ratio (γ), lag ratio (α), structural ratio (β), and wire ratio (σ) on system throughput. This way, the fundamental limits on the benefits of offload are exposed. In [19], a modified version of LAWS is presented; its authors propose adding three new parameters to the original model in order to obtain tighter results.

3.3.2. EMO. The extensible message-oriented offload model (EMO), see [20], is a conceptual model that captures the benefits of protocol offload in the context of high-performance computing systems. In contrast to the LAWS model, EMO emphasizes communication in terms of messages rather than flows. In contrast to the LogP model, EMO emphasizes the performance of the network protocol rather than the parallel algorithm. The EMO model allows protocol developers to consider the trade-offs and specifics associated with offloading protocol processing, including the reduction in message latency along with the benefits associated with the reduction in overhead and improvements in throughput. It is to be noted that the LAWS and LogP models can be mapped to EMO. The message-oriented nature of EMO, along with its emphasis on the communication patterns on a single host, allows us to focus on the benefits of offloading protocol processing specifically as a measure of overhead and gap. The variables of this model are as follows: CN (number of cycles of protocol processing on the NIC), RN (rate of the CPU on the NIC in MHz), LNH (time to move data and control from the NIC to the host OS), CH (number of cycles of protocol processing on the host), RH (rate of the CPU on the host in MHz), LW (time to move data and control from the network to the application), LHA (time to move data and control from the host to the application), LNA (time to move data and control from the NIC to the application), CA (number of cycles of protocol processing at the application), ONH (number of host cycles to move data and control from the NIC to the host OS), OHA (number of host cycles to move data and control from the host OS to the application), and ONA (number of host cycles necessary to communicate and move data from the NIC to the application). EMO allows us to explore the fundamental cost of any protocol, i.e., its overhead. Because overhead occurs at the per-message and per-byte level, the model allows us to estimate and graphically represent the overhead for various levels of protocol offload. The model graphically represents the protocol processing overhead of several methods for decreasing that overhead; for example, the authors of [20] compare interrupt coalescing, TCP offload, traditional zero-copy TCP, and splintered TCP. EMO shows graphically the intuitive result that offloading techniques provide the greatest performance improvement as the message size increases. In addition, the model graphically represents the latency of these methods for decreasing protocol processing overhead.
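To give a feel for how the LAWS parameters of Section 3.3.1 interact, the following sketch computes pipeline-style throughput bounds without and with offload. The expressions are our reading of the model in [18] and should be checked against that paper; all numeric values are illustrative only:

```python
# Hedged sketch of LAWS-style throughput bounds (our reading of [18], not a verbatim
# reproduction of that model).  Parameters follow the definitions in Section 3.3.1.
def laws_throughput(o, a, X, Y, p, B, beta=1.0):
    """Peak throughput without and with offload under a pipeline-style bound.

    o, a : normalized communication / application CPU occupancy per unit of bandwidth
    X, Y : occupancy scale factors for host and NIC processing
    p    : portion of the communication overhead o offloaded to the NIC
    B    : raw bandwidth of the network path
    beta : structural ratio (portion of offloaded overhead that remains, on the NIC)
    """
    host_only = min(B, 1.0 / (a * X + o * X))               # no offload: host does everything
    host_stage = 1.0 / (a * X + (1.0 - p) * o * X)          # host work remaining after offload
    nic_stage = 1.0 / (beta * p * o * Y) if p > 0 else float("inf")
    return host_only, min(B, host_stage, nic_stage)

# Illustrative values; try Y > 1 (a slower NIC) to see the benefit shrink or vanish.
before, after = laws_throughput(o=1.0, a=0.5, X=1.0, Y=1.0, p=0.8, B=1.25)
print(before, after, (after - before) / before)             # relative improvement from offload
```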


4. Real-Time Calculus

In addition to the analytical approaches described in the previous section, in this work we analyze the features provided by Real-Time Calculus (RTC). RTC is based on formal methods developed in the context of embedded systems, specifically for the design space exploration of network processor architectures (see [21-25]), the evaluation of electronic control units (ECUs) on the FlexRay bus in the automotive domain [7], and the analysis of different scheduling and arbitration policies of processing and communication resources in CAN networks [6]. In the cited papers, an analytical framework for evaluating design trade-offs in packet processing architectures is presented and validated by simulation. The framework primarily consists of a task and resource model for hardware resources and a calculus (i.e., Real-Time Calculus) that allows one to reason about packet streams and their processing. In this work, we consider this framework in a new context, namely the performance evaluation of network interfaces. In our case, the hardware resources that we intend to model are the NIC and system architecture components such as the I/O subsystem (I/O bus and arbiter) and the memory subsystem (memory controller, memory bus, and memory modules). The task model used in this work represents the different packet processing and communication functions that occur within a network node, for example packet processing within the NIC, DMA transfers through the PCI bus, and data transfers from the host bridge to the memory modules through the memory bus. The resource model captures information about the available communication capacity of the different hardware resources involved in packet transfers and the possible mappings of packet communication functions to these resources. The analytical framework also considers the characteristics of the packet flow entering the system, which is specified using its arrival curves. It is worth mentioning that in RTC the NIC, the I/O subsystem, and the memory subsystem can all be modeled as communication resources. Given the architecture of a network node (as illustrated in Figure 1), the calculus associated with the framework can be used to analytically determine properties such as the maximum amount of memory required within the NIC by the packet flow, taking into consideration the underlying scheduling disciplines at the different resources. In particular, our goal is to evaluate the influence of the system architecture components (I/O subsystem and memory subsystem) on the NIC's buffer requirements. Real-Time Calculus (developed by Thiele et al. at ETH Zurich) was introduced in [26]. According to its authors, RTC establishes a link between three areas, namely Max-Plus Linear System Theory as used for dealing with certain classes of discrete event systems, Network Calculus for establishing time bounds in communication networks, and real-time scheduling. In RTC, the basic model of a processing resource in the presence of incoming task requests is that of a resource that receives incoming requests and executes them using the available capacity. To this end, some non-decreasing functions are introduced in RTC, as defined next.

Definition 1 (Arrival and Service Function). An event stream can be described by an arrival function R, where R(t) denotes the number of events that have arrived in the interval [0, t). A computing or communication resource can be described by a service function C, where C(t) denotes the number of events that could have been served in the interval [0, t).

Definition 2 (Arrival and Service Curves). The upper and lower arrival curves αu(Δ), αl(Δ) ∈ ℝ≥0 of an arrival function R(t) satisfy

    αl(Δ) ≤ R(t + Δ) − R(t) ≤ αu(Δ),  for all t ≥ 0 and Δ ≥ 0.

The upper and lower service curves βu(Δ), βl(Δ) ∈ ℝ≥0 of a service function C(t) satisfy

    βl(Δ) ≤ C(t + Δ) − C(t) ≤ βu(Δ),  for all t ≥ 0 and Δ ≥ 0.

Using the analytical framework based on RTC, we can compute the maximum backlog experienced by a flow both in the case of a single resource processing the flow and in the case where the flow passes through multiple resources such as the NIC, the I/O subsystem, and the memory subsystem. Reference [22] describes how these computations can be performed. Thus, let αlf and αuf describe the arrival curves of a flow f in terms of the communication requests (for example, the number of communication cycles) demanded from a resource r, and let βlr and βur describe the communication capability of r in the same units (i.e., communication cycles). Then the maximum backlog suffered by packets of the flow f at the resource r is bounded by the following inequality:

    backlog(f, r) ≤ sup{ αuf(Δ) − βlr(Δ) : Δ ≥ 0 }.

A physical interpretation of this inequality can be given as follows: the backlog experienced by packets waiting to be served by r is bounded by the maximum vertical distance between the curves αuf and βlr. As described in [22], if the flow passes through multiple resources (such as the NIC, the I/O subsystem, and the memory subsystem) whose lower service curves are βl1, βl2, and βl3, then an accumulated lower service curve βl for serving this flow can be computed, and the backlog experienced by packets of the flow f is bounded by

    backlog(f) ≤ sup{ αuf(Δ) − βl(Δ) : Δ ≥ 0 }.

In the analytical framework, depending on the context in which these functions are used, the backlog can be computed in terms of the number of packets, the number of bytes, etc.
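A minimal numerical sketch of these bounds (ours, not part of the cited framework's tooling) is shown below: the curves are discretized over Δ = 0, 1, ..., N, the single-resource bound is the maximum vertical distance between αuf and βlr, and the accumulated lower service curve of a chain of resources is obtained here by min-plus convolution, which is one standard way of composing lower service curves in network calculus (the exact construction used in [22] should be checked there):

```python
# alpha_u[d] is the upper arrival curve of flow f and beta_*[d] are lower service
# curves, all expressed in the same units (e.g., communication cycles).

def max_backlog(alpha_u, beta_l):
    """Backlog bound: the largest vertical distance alpha_u(D) - beta_l(D)."""
    return max(a - b for a, b in zip(alpha_u, beta_l))

def accumulated_lower_service(curves):
    """Accumulated lower service curve of resources traversed in sequence,
    computed as the min-plus convolution of their lower service curves."""
    def conv(b1, b2):
        n = min(len(b1), len(b2))
        return [min(b1[i] + b2[d - i] for i in range(d + 1)) for d in range(n)]
    acc = curves[0]
    for c in curves[1:]:
        acc = conv(acc, c)
    return acc

# Illustrative (hypothetical) curves: a bursty flow and three resources
# (NIC, I/O subsystem, memory subsystem) with different lower service curves.
alpha_u  = [min(10 * d + 1, 2 * d + 25) for d in range(50)]   # dual-slope arrival curve
beta_nic = [4 * d for d in range(50)]
beta_io  = [max(0, 6 * (d - 3)) for d in range(50)]           # initial delay, then rate 6
beta_mem = [8 * d for d in range(50)]

print(max_backlog(alpha_u, beta_nic))                          # single resource
print(max_backlog(alpha_u, accumulated_lower_service([beta_nic, beta_io, beta_mem])))
```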


Table 1. Comparison of analytical approaches

| Criteria                          | RTC                                | Queuing Theory                               | LogP | LAWS    | EMO              |
|-----------------------------------|------------------------------------|----------------------------------------------|------|---------|------------------|
| NIC Buffer Requirements           | Buffer fill level, Maximum backlog | Average queue length                         | NO   | NO      | NO               |
| Packet Arrival                    | Arrival Curves                     | Poisson Process                              | NO   | NO      | NO               |
| Real Packet Arrival Trace         | YES                                | NO                                           | NO   | NO      | NO               |
| Packet Size                       | Fixed and Variable                 | Fixed                                        | NO   | NO      | NO               |
| NIC Processing                    | Service Curves                     | Mean Service Time, Service Time Distribution | NO   | p, Y, α | CN, RN, ONH, ONA |
| NIC Bandwidth                     | YES                                | Mean rate                                    | NO   | B, σ    | NO               |
| System-Components Characteristics | Service Curves                     | NO                                           | L    | NO      | LNH, LW, LNA     |

Legend. LAWS parameters: B - bandwidth of the network path; σ - wire ratio (ratio of host saturation bandwidth to raw network bandwidth; portion of the network bandwidth the host can deliver without offload); p - portion of the communication overhead o offloaded to the NIC; Y - occupancy scale factor for NIC processing; α - lag ratio (ratio of host processing speed to NIC processing speed). EMO parameters: CN - number of cycles of protocol processing on the NIC; RN - rate of the CPU on the NIC in MHz; ONH - number of host cycles to move data and control from the NIC to the host OS; ONA - number of host cycles necessary to communicate and move data from the NIC to the application; LNH - time to move data and control from the NIC to the host OS; LW - time to move data and control from the network to the application; LNA - time to move data and control from the NIC to the application.

5. Discussion

In this work we are interested in the capabilities of each analytical approach for modeling the following aspects:

Input workload. Link speed, packet arrival timestamps, and packet size (fixed and variable).

NIC characteristics. Layer 2 processing (CRC calculation), layer 3 processing (IP checksum calculation), layer 4 processing (TCP/UDP checksum calculation, TCP offloading), NIC bandwidth (i.e., NIC-to-PCI transfer rate), and interrupt coalescing.

System components characteristics and operation. Here, the I/O bus and the memory subsystem are considered. For these communication components, we are interested in the peak and average bandwidths they provide, as well as in the occupancy and contention of resources. This way, the characteristics of different I/O bus generations (PCI, PCI-X, PCI Express) and of the system memory hierarchy (e.g., DRAM speed, memory write latency, etc.) can be considered.

Table 1 summarizes all these issues. Thus, both Queuing Theory and Real-Time Calculus allow us to model queuing systems. Nevertheless, the differences in the scope of each approach should be noted. Real-time calculus belongs to the class of so-called deterministic queuing theories. It is deterministic in the sense that hard upper and lower bounds on the performance metrics (such as backlog) are always found. This distinguishes it from the class of probabilistic queuing theories, for which this guarantee cannot be provided in general. Deterministic queuing theories are well suited to studying hard performance bounds, since they ensure that all requirements are met by the system at all times. In contrast, real-time calculus does not allow us to model the average load of the system, and probabilistic approaches are better suited for this purpose. We consider that, contrary to queuing theory, where the average queue length or the probability of a packet being dropped due to buffer overflow (i.e., the probability that the queue will exceed a certain length) can be obtained, the backlog quantity in real-time calculus allows us to evaluate the worst case, i.e., the maximum buffer requirements at a given resource over any time interval of length Δ. Hence, the main contribution of real-time calculus is the analysis of system properties over any time interval of length Δ; nevertheless, we can also obtain the buffer fill level at one time instant t, in which case we need to compute the vertical deviation between the curves R(t) and C(t).
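As a small illustration of that last point (ours, with made-up traces), the buffer fill level at each instant is simply the vertical deviation between the cumulative arrival function R(t) and the cumulative service function C(t), whereas the backlog bound of Section 4 is a worst case over all intervals:

```python
# Buffer fill level over time from hypothetical cumulative arrival and service traces.
def fill_level(R, C):
    """Events arrived so far minus events served so far, clipped at zero."""
    return [max(0, r - c) for r, c in zip(R, C)]

R = [0, 3, 8, 12, 14, 20, 26, 27]   # cumulative arrivals (e.g., packets)
C = [0, 2, 4, 6, 10, 14, 18, 24]    # cumulative service capacity actually available
print(fill_level(R, C))             # [0, 1, 4, 6, 4, 6, 8, 3]
```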

Consequently, based on our modeling scope and the comparison of analytical approaches shown in Table 1, in the following we discuss the capabilities of each analytical approach for modeling the input workload, the NIC characteristics, and the system components characteristics and operation. The focus of attention will mainly be a comparison between QT and RTC, because these are the two approaches that allow us to evaluate the NIC's buffer requirements. References found in the literature will be used to show how the input workload, the NIC characteristics, and the system components characteristics and operation are modeled in queuing theory-based studies of network interfaces and in other studies. The approaches used in these studies are compared and contrasted with the concepts of real-time calculus presented in Section 4.

Input workload. With regard to modeling packet arrivals, many analytical studies of network interfaces based on queuing theory assume that the network traffic follows a Poisson process [27], i.e., the packet inter-arrival times are exponentially distributed and the packet sizes are fixed. In contrast to these studies, using arrival curves in RTC any kind of traffic pattern (periodic, Poisson, bursty, etc.) can be modeled. In addition, arrival curves can be constructed from realistic Ethernet packet traces, and different packet sizes (fixed or variable) can be modeled.

NIC characteristics. In [28], packet processing within the NIC is modeled using a given latency value that represents the minimum hardware latency per transaction of the offload engine. In RTC, NIC processing (e.g., layer 1/2/3/4 processing or application offloading, such as the implementation of an IDS on network processor-based NICs [29]) and the interrupt coalescing period are modeled as a pure delay, i.e., we consider the overhead of these operations, defined as the length of time that the NIC is occupied in the task processing without transferring any data.



By using service curves in RTC, this situation can be modeled adequately. Another issue to consider is the NIC bandwidth, which can be affected by firmware-level latency and on-chip memory access latency. From the analytical point of view, using service curves in RTC we can model components that provide different bandwidths, for example different values of the NIC-to-PCI bandwidth.

System components characteristics and operation. Because of the I/O bus operation (arbitration, burst size, wait cycles), actual data transfers through the PCI bus occur only during certain time intervals, and the achieved bandwidth is lower than the theoretical bus bandwidth. Similarly, memory bus and memory bank conflicts affect the performance of the memory subsystem. Contrary to queuing models, which do not allow us to evaluate the impact of the system components characteristics and operation on system performance, the available communication capacity of these resources can be modeled with service curves in real-time calculus.
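The following sketch (ours; all timestamps, rates, and bus parameters are hypothetical) illustrates the two RTC modeling steps discussed in this section: deriving an upper arrival curve from a packet timestamp trace by sliding a window of length Δ over it, and describing a shared bus whose capacity is only partly available to the NIC with a simple lower service curve:

```python
import bisect

def upper_arrival_curve(timestamps, deltas):
    """alpha_u(Delta) = maximum number of packet arrivals in any window of length Delta."""
    ts = sorted(timestamps)
    curve = []
    for delta in deltas:
        best = 0
        for i, t in enumerate(ts):
            j = bisect.bisect_right(ts, t + delta)   # arrivals in [t, t + delta]
            best = max(best, j - i)
        curve.append(best)
    return curve

def bus_lower_service_curve(deltas, rate, share, max_wait):
    """Lower service curve of a shared bus: after waiting at most max_wait for the grant,
    the NIC sees at least `share` of the peak transfer rate."""
    return [max(0.0, share * rate * (d - max_wait)) for d in deltas]

trace = [0.0, 0.1, 0.15, 0.9, 1.0, 1.05, 1.1, 2.4]   # packet arrival times (ms), hypothetical
deltas = [0.0, 0.5, 1.0, 2.0]
print(upper_arrival_curve(trace, deltas))            # packets per window length
print(bus_lower_service_curve(deltas, rate=4000, share=0.5, max_wait=0.2))  # bytes per ms
```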

6. Conclusion

In this paper, a different approach (i.e., Real-Time Calculus) for modeling the NIC's buffer requirements in a network node was presented. In particular, the advantages of RTC with respect to other analytical approaches commonly used for the performance evaluation of network interfaces in LAN environments were exposed. Based on the results of this comparison, we consider that RTC is suitable for evaluating the NIC's buffer requirements. In future work, a case study on modeling a network interface using Real-Time Calculus will be presented.

7. References

[1] K. Skadron, M. Martonosi, D. I. August, M. D. Hill, D. J. Lilja, and V. S. Pai, "Challenges in Computer Architecture Evaluation," Computer, vol. 36, pp. 30-36, 2003.
[2] J. J. Yi, L. Eeckhout, D. J. Lilja, B. Calder, L. K. John, and J. E. Smith, "The Future of Simulation: A Field of Dreams," Computer, vol. 39, pp. 22-29, 2006.
[3] L. Thiele, "Performance analysis of distributed embedded systems," in Proceedings of the 7th ACM & IEEE International Conference on Embedded Software, Salzburg, Austria: ACM, 2007.
[4] D. J. Lilja, Measuring Computer Performance: A Practitioner's Guide: Cambridge University Press, 2000.
[5] G. R. Garay, "A Survey of Analytical Modeling of Network Interfaces in the Era of the 10 Gigabit Ethernet," in Proceedings of the 6th IEEE International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE09), Toluca, Mexico, 2009, pp. 484-489.
[6] D. B. Chokshi and P. Bhaduri, "Modeling Fixed Priority Non-Preemptive Scheduling with Real-Time Calculus," in Proceedings of the 14th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications: IEEE Computer Society, 2008.
[7] D. B. Chokshi and P. Bhaduri, "Performance analysis of FlexRay-based systems using real-time calculus, revisited," in Proceedings of the 2010 ACM Symposium on Applied Computing, Sierre, Switzerland: ACM, 2010.
[8] A. Banerjee, W.-C. Feng, D. Ghosal, and B. Mukherjee, "End-system Performance Aware Transport over Optical Circuit-Switched Connections," in IEEE INFOCOM High-Speed Networking Workshop: The Terabits Challenge (in conjunction with the 25th IEEE INFOCOM), Barcelona, Spain, 2006.
[9] L. Kleinrock, Queueing Systems, Volume 1: Theory: Wiley-Interscience, 1975.
[10] R. Nelson, Probability, Stochastic Processes, and Queueing Theory - The Mathematics of Computer Performance Modeling: Springer Verlag, 1995.
[11] A. O. Allen, Probability, Statistics, and Queueing Theory - With Computer Science Applications: Computer Science and Applied Mathematics, Academic Press, New York, 1978.
[12] G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi, Queueing Networks and Markov Chains - Modeling and Performance Evaluation with Computer Science Applications: John Wiley and Sons, New York, 1998.
[13] L. Kleinrock, Queueing Systems, Volume 2: Computer Applications: John Wiley and Sons, New York, 1976.
[14] S. M. Ross, A First Course in Probability: Macmillan, fourth edition, 1994.
[15] W. J. Stewart, Introduction to the Numerical Solution of Markov Chains: Princeton University Press, Princeton, New Jersey, 1994.
[16] A. Willig, "A Short Introduction to Queueing Theory," 1999.
[17] D. Culler, R. Karp, D. Patterson, A. Sahay, K. E. Schauser, E. Santos, R. Subramonian, and T. v. Eicken, "LogP: towards a realistic model of parallel computation," SIGPLAN Not., vol. 28, pp. 1-12, 1993.
[18] P. Shivam and J. S. Chase, "On the elusive benefits of protocol offload," in Proceedings of the ACM SIGCOMM Workshop on Network-I/O Convergence: Experience, Lessons, Implications, Karlsruhe, Germany: ACM, 2003.
[19] A. Ortiz, J. Ortega, A. F. Díaz, P. Cascón, and A. Prieto, "Protocol offload analysis by simulation," J. Syst. Archit., vol. 55, pp. 25-42, 2009.
[20] P. Gilfeather and A. B. Maccabe, "Modeling Protocol Offload for Message-oriented Communication," in Proceedings of the 2005 IEEE International Conference on Cluster Computing (Cluster 2005): IEEE Computer Society, 2005, pp. 1-10.
[21] L. Thiele, S. Chakraborty, M. Gries, A. Maxiaguine, and J. Greutert, "Embedded Software in Network Processors - Models and Algorithms," in Proceedings of the First International Workshop on Embedded Software: Springer-Verlag, 2001.
[22] S. Chakraborty, S. Künzli, L. Thiele, A. Herkersdorf, and P. Sagmeister, "Performance evaluation of network processor architectures: combining simulation with analytical estimation," Comput. Netw., vol. 41, pp. 641-665, 2003.
[23] L. Thiele, S. Chakraborty, M. Gries, and S. Künzli, "Design Space Exploration of Network Processor Architectures," Network Processor Design: Issues and Practices, vol. 1, 2002.
[24] L. Thiele, S. Chakraborty, M. Gries, and S. Künzli, "A framework for evaluating design tradeoffs in packet processing architectures," in Proceedings of the 39th Design Automation Conference, New Orleans, Louisiana, USA: ACM, 2002.
[25] S. Chakraborty, S. Künzli, and L. Thiele, "A General Framework for Analysing System Properties in Platform-Based Embedded System Designs," in Proceedings of the Conference on Design, Automation and Test in Europe - Volume 1: IEEE Computer Society, 2003.
[26] L. Thiele, S. Chakraborty, and M. Naedele, "Real-Time Calculus for Scheduling Hard Real-Time Systems," in Proceedings of the 2000 IEEE International Symposium on Circuits and Systems (ISCAS 2000), vol. 4, Geneva, Switzerland, 2000, pp. 101-104.
[27] K. Salah and K. El-Badawi, "Evaluating System Performance in Gigabit Networks," in Proceedings of the 28th Annual IEEE International Conference on Local Computer Networks: IEEE Computer Society, 2003.
[28] K. Kant, "TCP Offload Performance for Front-End Servers," in Proc. of IEEE Global Telecommunications Conference (GLOBECOM 03): IEEE Press, 2003, pp. 3242-3247.
[29] P. Cascón, J. Ortega, A. F. Díaz, and I. Rojas, "Assessing the performance of an offloaded IDS on network processors," in PDPTA'09: The 2009 International Conference on Parallel and Distributed Processing Techniques and Applications, 2009.


tangible examples of its application to wildlife management are ... collecting LEK in a wildlife management context is ... K1A 0H3,. 2Canadian Wildlife Service ...