Complexity Measurements of the Inter-Domain Management System Design

Ognjen Prnjat, Lionel Sacks
University College London, Torrington Place, London WC1E 7JE, England
phone: +44 20 76793946; email: {oprnjat | lsacks}@ee.ucl.ac.uk

Abstract

Current use of software metrics in industry focuses on cost and effort estimation, while some research has been carried out on their use as fault indicators. Empirical studies in software measurement are scarce, especially in the realm of object-oriented metrics, and there is no record of a management system being assessed using these metrics. In this paper we discuss an approach to using established object-oriented software metrics as complexity/coupling, and thus risk, indicators early in the system development lifecycle. Further, we subject a medium-scale inter-domain network and service management system, developed in UML, to the metric assessment, and present an analysis of these measurements. This system was developed in the European Commission-sponsored ACTS research project TRUMPET. The results indicate that the highest level of complexity, and thus also risk, is exhibited at major interconnection points between autonomous management domains. Moreover, the results imply a strong ordinal correlation between the metrics.

1

Introduction

Telecoms management systems are complex distributed software systems with many dependencies. Designing, implementing and deploying such systems carries finite risk in terms of the impact of their stability on the overall stability of the underlying managed networks [1]. The ability to highlight and remove potential risk areas in a management system's operation early in the development lifecycle would thus be greatly beneficial. Here, we discuss an approach to assessing the complexity and coupling of system classes using well-established object-oriented metrics, with the aim of pinpointing potential risk areas early in the design. We select a set of seven established metrics, and describe how these can be deployed early in the development lifecycle. We illustrate our approach with a case study of the ACTS project TRUMPET inter-domain service management system, and present complexity/coupling measurements. Considering the scarcity of empirical metric studies, and the absence of any previous management system assessment using metrics, we regard the case study as a general contribution to the field. First, we introduce the state of the art in software metrics. Then, we present our candidate early-lifecycle metric suite. Next, we present the TRUMPET inter-domain management system: we discuss the modelling approach and present the details of the autonomous domains. Finally, we present the results of the assessment of the TRUMPET system with our metric suite, and discuss the outcome and implications.

2

Software measurement background

Software measurement is a branch of software science dealing with the measurement of internal and external attributes of software. Internal attributes are measured only in terms of the entity under observation, and they are measured directly, i.e., independently [2]. External attributes are measured in terms of how the entity relates to its environment, and they are measured indirectly, i.e., measures of other attributes must exist in order to obtain the measure of an external attribute. Software measurement is rarely applied in industry: only 1-2% of software organisations use metrics in the development process [3]. Applications in industry focus on cost, productivity and effort estimation [4]. A set of metrics distinct from these process-oriented estimation metrics focuses on measuring the internal structure of software. These aim to capture the complexity of software modules and their dependencies. A number of pre-object-oriented complexity measures exist [5]. With the evolution of object-oriented (OO) design, a number of new complexity metrics emerged. The old metrics are not applicable to the OO paradigm, where data and algorithms are bound together in a class, and a software program is a number of collaborating objects. OO structural complexity metrics are presumed to be collectable early in development [6][7], from analysis and design documents developed through a notation such as OMT [8] or the Unified Modelling Language [9]. A number of OO metrics exist [10]; in the following we list the most important ones. Inheritance complexity is measured using the Depth of Inheritance Tree (DIT) [11] and Number of Children (NOC) [11] metrics. The complexity of inter-class relationships can be measured using the number-of-relationships metric [12]. Stand-alone class complexity is assessed using the Weighted Methods per Class (WMC) metric [11], and the interface complexity metric [10].
The relationship between classes can also be measured using the Coupling Between Objects (CBO) [11], Message-Passing Coupling (MPC) [12][13] and Response For a Class (RFC) [11] metrics. The Whitmire complexity metric [14] quantifies the overall relationship complexity, including associations, aggregations, inheritance and message passing. The Lack of Cohesion of Methods (LCOM) [11] measures the amount of cohesion in a class. DIT, NOC, CBO, RFC, WMC and LCOM are collectively known as the CK (Chidamber-Kemerer) metrics. The CK metrics have been suggested for the prediction of external process attributes: productivity, re-work and design effort [6]; testing effort and reuse [11]; and maintenance effort [13]. These studies indicated that the metrics are effective for the assessment of these economic variables. As such, the metrics are considered a managerial tool aiding project managers in effort allocation and planning. In [11] the metrics were also suggested as a means to identify design flaws and areas for redesign; however, no details were given. A second family of studies [7][15][16] dealt with a different aspect of metrics application: their relationship with fault-proneness. In [16] the CK metrics were shown to be better at predicting fault-proneness than other existing metrics. The metric counts were related through a model to the binary value of fault-proneness: each class was detected during testing as either having a fault or not. Measurements were performed on final code; the faults were recorded during testing. There are only a few reported studies dealing with empirical OO measurements [6][11][13][15][16][17]. In all of these, the CK metrics were collected directly from the code. Apart from one of the three studies in [6], the only other reported study that attempted to collect the metrics from analysis and design documents is [18]. There, however, most of the metrics proved difficult to collect from the design documents without access to the implementation, with the exception of DIT and NOC. In [7], the use of metrics early in the development lifecycle was suggested; however, the source code of a mail system (141 classes) was used for the metrics collection.

3

Metric suite

Measuring complexity and coupling early in the telecoms system development lifecycle is of high importance. From the early years of software engineering [19] to the modern days of OO software [20], one axiom has been established: good internal structure implies good external attributes of software. First, good software should have low coupling between classes. Coupling is a measure of the degree of dependence between classes: "two classes are coupled if there is evidence that methods defined in one class use methods or instance variables defined in another" [11]. Second, the stand-alone classes of good software should have high cohesion and low internal complexity. Cohesion is the extent to which a class is geared towards performing a coherent task [19]. Internal class complexity can concern either the class's internal structure (the complexity of its control flow) or the complexity of the class as seen from the outside: the complexity of its interface. By locating and removing or redesigning points in the design which are highly complex, the telecoms software designer would avoid likely causes of software failure, and would minimise likely fault propagation by reducing the coupling of the modules/classes. In this context, metrics can be seen as risk indicators early in the development lifecycle [21]. Our early-lifecycle metric suite [21] consists of seven distinct OO metrics: Depth of Inheritance Tree (DIT), Number of Children (NOC), Coupling Between Objects (CBO), Message-Passing Coupling (MPC), Response For a Class (RFC), the interface complexity metric, and the Whitmire complexity metric. All are class-level metrics. DIT [11] is the depth of the inheritance tree: deeper trees constitute greater design complexity, and the deeper a class is in the inheritance hierarchy, the more methods it inherits and the more complex it is. NOC [11] is defined as the number of immediate sub-classes subordinated to a class in the class hierarchy: classes with high NOC are more complex since they affect more classes. CBO [11] is a count of the number of other classes that a class is coupled to. If a method in class A uses methods or instance variables of class B, then A is coupled to B. CBO is independent of the number of references that A makes to B. High coupling makes a class highly dependent on other classes, and thus more vulnerable to error propagation and less reliable. MPC [13] is, in contrast to CBO, dependent on the number of references that class A makes to class B. MPC is defined as the number of send statements in a class. A large MPC implies a large dependency on other classes: classes with high MPC have higher coupling and pose more risk to system operation. RFC [11] is the set of all methods that can be invoked in response to a message received by an object of a class, i.e., the number of methods potentially available to the class. A large RFC indicates large complexity: tracing of dependencies becomes difficult, and coupling paths more intricate. The interface complexity metric [10] assesses the stand-alone complexity of a class. An interface can be specified as a set of services: queries and commands. Interface complexity is the sum of weighted commands and queries, the weight factor being the number of arguments required by the query/command. The larger the interface, the more difficult it is to select and correctly use the service provided by the class. Whitmire complexity [14] assesses the total class coupling within the design. It is a four-dimensional metric: the dimensions are the sets of inheritance, association, aggregation and message-passing arrows related to the particular class. The magnitude in each dimension is given by the cardinality of the relevant set of arrows.
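To make these definitions concrete, the following sketch (not from the paper; the class names, call lists and argument counts are purely illustrative) computes CBO, MPC, RFC and interface complexity from a toy design model:

```python
# Hypothetical design model: each class maps to the (callee_class, method)
# pairs it invokes, plus its own declared methods with their argument counts.
design = {
    "VASP_VPN_Manager": {
        "calls": [("PNO_Conn_Manager", "reserve"), ("PNO_Conn_Manager", "release"),
                  ("VASP_MIB", "read")],
        "methods": {"setup_connection": 3, "teardown_connection": 1},
    },
    "PNO_Conn_Manager": {
        "calls": [("VASP_MIB", "read")],
        "methods": {"reserve": 2, "release": 1},
    },
    "VASP_MIB": {"calls": [], "methods": {"read": 1}},
}

def cbo(cls):
    # CBO: number of distinct classes this class is coupled to [11].
    return len({callee for callee, _ in design[cls]["calls"]})

def mpc(cls):
    # MPC: number of send statements; every reference counts [13].
    return len(design[cls]["calls"])

def rfc(cls):
    # RFC: own methods plus distinct remote methods reachable via messages [11].
    remote = {(callee, m) for callee, m in design[cls]["calls"]}
    return len(design[cls]["methods"]) + len(remote)

def interface_complexity(cls):
    # Interface complexity: services weighted by their argument counts [10].
    return sum(design[cls]["methods"].values())

print(cbo("VASP_VPN_Manager"), mpc("VASP_VPN_Manager"),
      rfc("VASP_VPN_Manager"), interface_complexity("VASP_VPN_Manager"))
```

Note how CBO stays at 2 while MPC counts all three send statements: the two metrics diverge exactly as described above.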
This set of OO metrics can be calculated from analysis and design documents. The DIT and NOC metrics can be calculated early in the lifecycle, from the UML class diagrams depicting the inheritance hierarchy. CBO, MPC, RFC and Whitmire complexity can be calculated once the interrelationships between classes are identified: UML class diagrams depicting associations and aggregations must be available, as well as the collaboration diagrams illustrating the message exchange between collaborating objects. The interface complexity metric can be calculated once the stand-alone class interface has been specified, including the full set of parameters. These metrics were chosen as a representative set for assessing the analysis/design complexity of a system. We believe they effectively capture the complexity of the design, addressing both stand-alone class complexity and the different forms of inter-class coupling, ranging from inheritance coupling, through general relationship coupling such as association and aggregation, to message-passing coupling, which reflects the amount of interaction at the detailed level. This metric set is also representative because it includes the key metrics suggested in research. We omitted the stand-alone class cohesion measures (LCOM [11]): these depend on low-level class internal detail, available only through code-level information, and are thus not useful when considering the analysis and design information which we propose to measure. In the following, we present the TRUMPET inter-domain management system that was used as a case study for the complexity/coupling measurements.

4

TRUMPET management system

ACTS project TRUMPET dealt with securing the Telecoms Management Network (TMN) X interfaces. The TRUMPET management system aimed to provide a realistic platform on which to implement the security policies. This service management architecture (Figure 1) [22][23] was developed using an original development methodology [22][24], and it involves administratively separate players (marked as grey boxes): two (or more) Public Network Operators (PNOs), a Value Added Service Provider (VASP), and a number of customers at various sites - Customer Premises Networks (CPNs). A set of trials was established to evaluate the architecture in real operational environments. Each of the players has an independent management system under its control: these collaborate to provide and maintain the ATM connections between two customers/end users. The CPN has a contract with the VASP regarding the use of the service by end-users. The VASP provides network connectivity to customers by utilising the resources of one or more PNOs. The VASP is responsible for the service offered, and allows customers to create, modify and delete connections, thus providing the Virtual Private Network (VPN) service. The PNOs provide the VASP with the infrastructure and connectivity capabilities, by operating basic switching and transmission.

Figure 1 - Management architecture [22]

The development approach [22][24] combined the TMN architecture models, the ODP Viewpoint framework [25], and the UML notation [9]. The methodology was use-case driven. The only strong architectural requirement on the system was the full-featured TMN X interface between the VASP and the PNO: the security policies were applied there. The rest of the system did not need to adhere to any particular standards; however, the TMN architecture was important, since the system is essentially targeted at large-scale commercial public network management. ODP was used to structure the development information, which was depicted through UML diagrams. ODP provides a general framework to which distributed systems aiming to operate in a multi-provider environment should conform. The basis of ODP is its five viewpoints (enterprise, information, computational, engineering and technology), which allow different participants in development to observe the system from different perspectives. ODP does not prescribe any particular notation. However, since descriptions of the same component can exist in different viewpoints, there is a requirement that these specifications be consistent. Also, components in each viewpoint must be clearly identified and related to each other. Thus, it is favourable to use one language for all viewpoints. UML can be used to describe the ODP enterprise, computational, information and, to some degree, engineering viewpoints [22][24]. UML provides a set of diagrams, has rich semantics, and defines how the different kinds of diagrams should relate to each other. Thus, it offers a possibility for providing for the consistency and coherence of ODP specifications [24]. The enterprise viewpoint is described with the UML use case diagram, which depicts the functionality of the system via management scenarios, and a package diagram of the main actors in the autonomous domains. Classes in the information viewpoint are described using class diagrams, depicting the structure of classes and their relationships, including inheritance (parent-child), associations (general) and aggregations (containment relationships). The computational viewpoint describes how the management functions, identified via the enterprise use cases, are performed. Functions are described in terms of computational objects and activities (interactions between objects). Components identified in the enterprise viewpoint are mapped to computational objects, which provide a coarse-grain view of the system. Each component is then decomposed into a set of computational objects representing the detailed model. UML class diagrams were used to describe the structure of the computational objects, their relationships and interfaces. Class diagrams also describe the stand-alone computational objects' external interfaces.
Object interaction is described in terms of client-server based operation invocation, depicted via UML collaboration and sequence diagrams. The engineering viewpoint was elaborated through the UML component and deployment diagrams: the component diagram shows the organisation of, and dependencies among, runtime modules, while the deployment diagram shows how components and objects are distributed around the system. The design and implementation details of the domains are as follows. The design on the customer premises focuses on the basic interface to the VASP for the purpose of service provision, as well as on the automation of the interactions between the VASP and other elements within the customer premises: local network management, local database management of accounts or usage, etc. The VASP needs to support the provision of connections for a number of customers, by using the resources of one or more PNOs. This is achieved by designing three main components: the Customer Server, the Control Server (or VASP_VPN_Manager), and the VASP Management Information Base (MIB). The Customer Server provides for customer access to the VASP, and the VASP_VPN_Manager provides for VASP access to the PNOs. The MIB-like component supports the required data models: it contains managed objects that hold all the information about the resources that the VASP manages. The main managed object class is the VASP VPConnection: these objects are created to represent an end-to-end connection between two CPNs, and their attributes represent the connection information (bandwidth, schedule, Quality of Service). The interface between the VASP and the PNO service management layer (PNO_Conn_Manager) components is the Xuser interface. The information model for the Virtual Path (VP) connection management, supported at each PNO site, is shown in Figure 2.
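The Figure 2 information model could be rendered, purely as an illustrative sketch (the dataclass layout and type aliases are our assumptions, not the project's implementation), as:

```python
# Hypothetical rendering of the PNO information model (Figure 2) as Python
# dataclasses; string type aliases stand in for the typed attributes shown
# in the figure.
from dataclasses import dataclass, field
from typing import List

AdministrativeAddress = str
Identifier = str

@dataclass
class VPUser:
    userAdminAddress: AdministrativeAddress
    userId: Identifier
    userCategory: str

@dataclass
class AccessPoint:
    accessPointId: str
    e164Address: str
    connectionPtr: str
    qosLimitsSeq: List[str] = field(default_factory=list)

@dataclass
class VPConnection:
    reservationDuration: int
    routingCriteria: str
    connectionId: Identifier
    accessPointPtr: str
    listOfDestAddr: List[str] = field(default_factory=list)

@dataclass
class VPServiceProvider:
    administrativeAddress: AdministrativeAddress
    users: List[VPUser] = field(default_factory=list)            # "has 0..*"
    connections: List[VPConnection] = field(default_factory=list)  # "maintains 0..*"
```

The lists model the one-to-many "has" and "maintains" associations of the figure.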

[Figure 2 class diagram: VP Service Provider (administrativeAddress: AdministrativeAddress), VP User (userAdminAddress: AdministrativeAddress; userId: Identifier; userCategory: UserCategory), Access Point (accessPointId: AccessPointId; e164Address: E164Address; connectionPtr: ConnectionPtr; qosLimitsSeq: QoSLimitsSequence) and VP Connection (reservationDuration: Duration; routingCriteria: RoutingCriteria; connectionId: ConnectionId; accessPointPtr: AccessPointPtr; listOfDestAddr: ListOfDestAddr), linked by "has", "maintains" and "references" associations with 1, 0..* and 1..* multiplicities.]

Figure 2 - PNO information model [22]

5

Results

This section presents the complexity measurements of the TRUMPET system. The metrics data source - the design documentation [22] - comprises 150 pages of text and UML diagrams. We assessed only the classes in the VASP and PNO domains, since the documentation for the CPN was incomplete. The design of the VASP and the PNO consisted of 32 classes. Each of the metrics forming the metric suite, apart from DIT and NOC, was applied within an ODP viewpoint and to the UML diagrams describing it. The designers of the TRUMPET system did not use inheritance at all; thus, the DIT and NOC metrics are not applicable. The metrics data was collected manually.

Figure 3 - Metrics distributions per class

Figure 3 depicts the metrics distributions per class. On average, the most complex classes are the VASP_VPN_Manager and the PNO_Conn_Manager. These are the main computational object classes operating on the X interface between the VASP and PNO domains. Next are the PNO_VP_Conn_Handler and the PNO_Nw_Manager, the computational object classes in the PNO domain, on the service and network levels. Next is the VASP_Customer_Server, the computational object class in the VASP domain operating at the interface with the CPN. Following these is a set of purely information object classes representing the key data entities in both the VASP and PNO domains. For the Whitmire complexity metric, we present basic summary statistics (Table 1), together with a histogram and a boxplot of the metric values (Figure 4). The boxplot depicts the centre and variation of the data set, and marks the outliers (with a star). It shows the skewness of the data, both by the position of the median (the value m for which half the values are smaller than m and half are bigger) in the box, and by the length of the tails. Here, the median is offset from the centre and the tail lengths are unequal (the left tail is non-existent): the data set is strongly skewed, with the mass concentrated at low values and a long upper tail. This metric identifies the two X interface computational object classes as the most complex.

Mean               3.531
Median             2
Standard deviation 5.187
Minimum            0
Maximum            27

Table 1 - Whitmire complexity statistics

Figure 4 - Whitmire histogram (left) and boxplot (right)

CBO values are low (only a few classes higher than 2), indicating that the interconnection between classes is kept at a reasonable level. The highest CBO is exhibited by the X interface computational objects in both domains. Most of the classes have low MPC counts: the MPC values follow the distribution of the Whitmire complexity and CBO, with the X interface classes in the two domains distinctly standing out. The MPC counts of the information object classes are 0: these are communication sinks. RFC follows the same distribution for the computational object classes; however, the most important information object classes now exhibit a complexity increase. This is because RFC measures the methods available to the class, which even for a moderately interacting class can be high due to the large number of methods within the class itself. Interface complexity follows CBO and MPC for the computational object classes; however, the most important information object classes again exhibit a complexity increase as compared to CBO and MPC, for the same reason as for RFC.
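The skew diagnostic discussed above (median offset from the mean, long upper tail) can be reproduced with the standard library; the values here are illustrative, not the TRUMPET measurements:

```python
# Sketch: summary statistics of the kind shown in Table 1, computed on
# hypothetical per-class Whitmire counts (illustrative data only).
import statistics

values = [0, 0, 1, 1, 2, 2, 2, 3, 4, 5, 8, 27]

mean = statistics.mean(values)
median = statistics.median(values)
sd = statistics.stdev(values)

# For a distribution with mass at low values and a long upper tail,
# the mean exceeds the median.
print(round(mean, 3), median, round(sd, 3), min(values), max(values))
```

The single outlier (27) pulls the mean and standard deviation well above the median, which is exactly the pattern the boxplot of Figure 4 conveys.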

6

Discussion

The metrics data exhibits distinctive features, including non-normal distributions and multicollinearity. We therefore conducted a statistical analysis of the measurements, so as to trace interesting trends in the data. This is followed by a more general discussion of the experiment. The typical metric distribution (for all metrics) is strongly skewed, with most values concentrated at the low end and a few outliers distinctly standing out. There is also a strong relationship across the whole set of metrics: multicollinearity. Thus, we concentrated on investigating the associations between the metrics. First, we investigated the linear association between metrics, by calculating r, the linear (Pearson) correlation coefficient, for each pair of metrics. This coefficient is a descriptive measure of the straight-line relationship between two variables, and is the typical approach to testing for metric interrelationships reported in the literature [6][13]. The linear coefficients for the TRUMPET measurements are high for each pair of metrics: the majority are higher than 0.8. A similar result was reported in [6], where the coefficients were also mostly higher than 0.8. These results would suggest that the regression equations linking each pair of metrics are highly suitable for predicting one metric on the basis of the other. However, in scatter plots the data points are only very weakly clustered about a straight line. Also, regression is sensitive to the presence of outliers, which are a distinctive feature (legitimate data points) of the metric distributions. Finally, two basic assumptions for regression inference, checked via residual plots, are not met. Thus, we conclude that no metric is useful as a linear predictor of another: the magnitude ordering of one metric does not directly imply the linear magnitude ordering of the other, and the intervals between two values of one metric are not proportional to the intervals between the corresponding values of the other metric, for any pair of adjacent classes that these measurements refer to. Since the distributions are non-normal, basic statistics such as the mean, and the use of parametric statistical methods, do not accurately capture the distribution features and the interrelationships between the distributions. In this case, the use of robust statistics (such as the median and ranks) and nonparametric statistical methods is advocated [26][27]. The assumptions for the use of nonparametric statistical methods are less restrictive than for the parametric methods (which usually assume normal distributions, equality of variances across samples, linearity, etc.). The nonparametric methods are just as rigorous, and allow the analysis of order relations [26]. The nonparametric method equivalent to linear correlation is the calculation of the rank (Spearman's) correlation coefficient between paired values (from the same classes) of two metrics. This procedure lowers the metrics scale from interval to ordinal, avoiding magnitude-related relationships between metrics [26]. Here, the ranks of the metrics are used, rather than the metric values themselves. This loosens the assumptions about the data relationships (linearity), while still giving a valid measure: the ranking of classes according to the metric values. The rank coefficients are shown in Table 2. All coefficients are significant at the 95% confidence level.

       CBO    MPC    RFC    Int.   Whit.
CBO    1
MPC    0.995  1
RFC    0.835  0.837  1
Int.   0.779  0.775  0.975  1
Whit.  0.712  0.719  0.555  0.532  1

Table 2 - Metrics rank correlation coefficients

The coefficients are high, indicating a strong ranking relationship between the metrics. Thus there is enough statistical evidence to say that each metric could be useful in predicting the ranks of the other metrics (depending on the value of the rank coefficient). Exceptionally high (>0.95) rank correlation is exhibited between CBO and MPC, and between RFC and interface complexity. This indicates that any combination of CBO or MPC, RFC or interface complexity, and the Whitmire complexity metric could form a set of three complementary metrics. We consider the TRUMPET metrics experiment to be a strong empirical research contribution. As discussed before, there are few reported studies dealing with practical system evaluation using OO metrics. In the majority of reported studies, the (CK) metrics were collected from code, despite the fact that they were originally envisaged [11], and later advertised [7], as early-lifecycle measures. One of the only two reported studies involving design documents [18] reported problems with metrics collection: only DIT and NOC were collected successfully. Moreover, metrics have never before been applied to management systems: our experiment is the first to assess a management system using metrics. Although the system originates from a research project, we consider it representative, since a number of professional organisations participated in the design. The system was chosen due to the public availability of its design documentation, which is not the case for industrial organisations. The experiment also demonstrated that metrics collection from analysis and design documents is possible: through thorough use of established diagrammatic techniques such as UML, metrics collection becomes easy. A number of observations can be made concerning the results. First, the system did not contain any inheritance. This might be due to the design team not being accustomed to the OO design philosophy, or it could be a reflection of the size of the system (small-scale). Low inheritance measures (DIT and NOC) were also reported in a number of earlier studies [6][16][18]. Secondly, the CBO counts appear distinctly low compared to the other metrics (RFC, MPC, Whitmire) which also depict coupling between objects. This is due to the fact that these other measures include the amount of collaboration between classes. However, the CBO counts are still lower than in previously reported studies, which can be due either to the size of our system, or to the fact that the designers, using design heuristics, minimised the number of collaborating classes. Next, all the metrics have a typical non-normal distribution: strongly skewed, with a few distinct outliers. Also, the metrics are highly correlated in terms of the ranking of the classes. Particularly high rank correlations are exhibited between CBO and MPC, and between RFC and interface complexity. Finally, the majority (61%) of classes have low metric counts (0-2). All of the metrics thus identify a subset of classes as distinct from the others in terms of complexity. These are the main computational object classes in the system, performing a manager-control task. The highly complex classes can also be the most important information object classes in the system. The most complex classes singled out in our case study were the classes operating at the interfaces between administrative domains; these were singled out by all of the metrics making up our metric suite.

These measurements indicate that the highest risk to the system's operation is exhibited at the major interconnection points between autonomous management systems. This argument had already been made in the risk context [28]; however, it was not empirically justified. During the development and testing of the TRUMPET software, no detailed testing or failure data was collected. However, the two computational object classes operating at the X interface between the VASP and the PNO domains - those singled out as the most complex in the design - did prove to be the most difficult to implement and test, and were the main source of pitfalls during the project trials [29]. The metric suite in this experiment was used as an analysis tool for diagnosing the most complex classes; these classes are then labelled as the classes carrying the highest risk. Re-design was not applied during the system development, because the system was assessed post facto.

7

Conclusion

In this paper we suggested the use of established OO software metrics for the assessment of a system design early in the development lifecycle. Complexity and coupling measurements give an early indication of the potential risk areas in the system design. This information is valuable in the context of modern, distributed management systems, whose correct functioning is of paramount importance for the operations and maintenance of the underlying managed telecoms network. We used seven existing software measures to form a metric suite which yields complexity/coupling measurements of the system classes. We demonstrated the usefulness and applicability of this approach through a case study of the TRUMPET management system. Apart from being one of the rare empirical studies of OO measurement, this experiment is the first to assess a telecoms management system design using OO metrics. The study empirically demonstrated that the highest complexity, and thus also risk, for the management system's operation is exhibited at the interconnection points between administrative domains. The experiment also assessed the nature of the interrelationships between the individual metrics within the metric suite, uncovering a strong ordinal relationship between the metrics.

8

References

[1] O. Prnjat, L. Sacks, "Integrity Methodology for Interoperable Environments", IEEE Communications Magazine, Special Issue on Network Interoperability, Vol. 37, No. 5, pp. 126-139, May 1999.
[2] N. Fenton, "Software Measurement", IEEE Transactions on Software Engineering, Vol. 20, No. 3, March 1994.
[3] E. Yourdon, "Rise and Resurrection of the American Programmer", Prentice-Hall, 1996.
[4] E. F. Weller, "Using Metrics to Manage Software Projects", IEEE Computer, Vol. 27, No. 9, pp. 27-33, 1994.
[5] M. Shepperd, "Software Engineering Metrics, Vol. 1", McGraw-Hill, 1993.
[6] S. R. Chidamber, D. P. Darcy, C. F. Kemerer, "Managerial Use of Metrics for Object-Oriented Software", IEEE Transactions on Software Engineering, Vol. 24, No. 8, August 1998.
[7] T. Kamiya, S. Kusumoto, K. Inoue, "Prediction of Fault-proneness at Early Phase in Object-Oriented Development", Proceedings of the 2nd IEEE ISORC '99, pp. 253-258, 1999.
[8] J. Rumbaugh et al., "Object-Oriented Modelling and Design", Prentice-Hall, 1991.
[9] Rational Software Corporation, Unified Modelling Language, http://www.rational.com/
[10] B. Henderson-Sellers, "Object-Oriented Metrics, Measures of Complexity", Prentice-Hall, 1996.
[11] S. R. Chidamber, C. F. Kemerer, "A Metrics Suite for Object-Oriented Design", IEEE Transactions on Software Engineering, Vol. 20, No. 6, pp. 476-493, 1994.
[12] M. Lorenz, J. Kidd, "Object-Oriented Software Metrics", Prentice-Hall, 1994.
[13] W. Li, S. Henry, "Object-Oriented Metrics that Predict Maintainability", Journal of Systems and Software, Vol. 23, pp. 111-122, 1993.
[14] S. A. Whitmire, "Object-Oriented Design Measurement", John Wiley and Sons, 1997.
[15] L. C. Briand, J. Daly, V. Porter, J. Wust, "A Comprehensive Empirical Validation of Design Measures for Object-Oriented Systems", Proceedings of the 5th International Software Metrics Symposium, pp. 246-257, 1998.
[16] V. R. Basili, L. C. Briand, W. L. Melo, "A Validation of Object-Oriented Design Metrics as Quality Indicators", IEEE Transactions on Software Engineering, Vol. 22, pp. 751-761, 1996.
[17] C. Kirsopp, M. J. Shepperd, S. Webster, "An Empirical Study Into the Use of Measurement to Support OO Design", Proceedings of the 6th IEEE International Metrics Symposium, IEEE CS, 1999.
[18] M. Cartwright, M. Shepperd, "An Empirical Investigation of Object-Oriented Software in Industry", Technical Report TR96/01, Bournemouth University, 1996.
[19] L. Constantine, E. Yourdon, "Structured Design", Prentice-Hall, 1979.
[20] E. V. Berard, "Essays on Object-Oriented Software Engineering", Prentice-Hall, 1993.
[21] O. Prnjat, L. Sacks, "Telecoms System Design Complexity and Risk Reduction Based on System Metrics", Proceedings of the European Workshop on Dependable Computing, May 1999.
[22] O. Prnjat, L. Sacks (Eds.), "Detailed Component and Scenario Designs", Deliverable 8, TRUMPET Project, June 1997.
[23] L. Sacks, O. Prnjat, et al., "TRUMPET Service Management Architecture", Proceedings of the 2nd International Enterprise Distributed Object Computing Conference, November 1998.
[24] M. Kande, S. Mazaher, O. Prnjat, L. Sacks, M. Wittig, "Applying UML to Design an Interdomain Service Management Application", Proceedings of the UML '98 International Conference, June 1998.
[25] ITU Draft Recommendation X.901-X.904, "Basic Reference Model of Open Distributed Processing".
[26] N. F. Schneidewind, "Methodology for Validating Software Metrics", IEEE Transactions on Software Engineering, Vol. 18, No. 5, pp. 410-422, May 1992.
[27] N. E. Fenton, "Software Metrics - A Rigorous Approach", Chapman and Hall, 1991.
[28] K. Ward, "Impact of Network Interconnection on Network Integrity", British Telecommunications Engineering, Vol. 13, pp. 296-303, January 1995.
[29] O. Prnjat, L. Sacks (Eds.), "Trials and Technology Assessment", Deliverable 8, TRUMPET Project, December 1998.
