Software Process and Product Improvement: A Historical Perspective

Elli Georgiadou
Principal Lecturer, School of Computing Science
Middlesex University, London, UK
[email protected]

Abstract. Most software systems can be considered at least partial failures because very few systems meet all their cost, schedule, quality or requirements objectives. Failures are rarely mysterious in origin, yet their causes are usually discovered post-mortem, when it is too late to change direction. In recent years the software engineering community has recognised the need to shift from product inspection and correction to process improvement. In this paper we present a historical overview of significant attempts to address the software crisis. In particular we trace the development of lifecycle models and information systems development methodologies over the last four decades. Finally we explore the role of measurement and outline current and future work leading towards process and product improvement.

Keywords: software crisis, software failures, methodologies, lifecycles, process improvement, software measurement, metrics

1. Software Crisis: The Symptoms
Quality and quality control were the domain of production engineers until the late 60s, when the term Software Engineering was coined [Sommerville 2001]. Concerns about late delivery of software, low reliability and high maintenance costs have shaped all efforts towards software quality improvement. In order to deal with what became known as the Software Crisis it has been necessary to study the methods of software production as well as the software artefacts themselves. Researchers and practitioners have developed a plethora of lifecycle models, methods, techniques and tools in an attempt to minimise the likelihood of software failure. For over thirty years the Software Engineering community has recognised the necessity of addressing the Software Crisis, and researchers and practitioners have endeavoured to identify ways of improving productivity and product quality. New languages were often believed to have almost magical powers of resolving the crisis. Automated tools, formal methods and more recently object-orientation have been proposed as alternative 'religions', with Software Engineers becoming fervent followers of one or another approach [Georgiadou 1995c], [Siakas 1997].

This quest resulted in the development of thousands of innovative ways of building software, from new technologies to progressive processes and frameworks [Avison 1995], [Jackson 1994], [Jayaratna 1994]. Despite these efforts systems continue to fail with dramatic frequency. A survey of hundreds of corporate software development projects indicated that five out of six software projects are considered unsuccessful [Johnson 1995], and that approximately a third of software projects are cancelled. The remaining projects delivered software at almost double the originally planned budget and development time.

2. Product and Process Improvement
In the early years Software Engineering adopted end-of-cycle quality inspection, just as early manufacturing did, where the quality of the product was achieved through inspections at the end of the production line. Inspections sorted products into three categories: the accepted, the rejected and those requiring rework. The last two 'heaps', the rejects and the reworks, gave a measure of the losses which every manufacturer needed to reduce for survival and competitive advantage [Gilb 1993], [Burr 1995]. Software artefacts or products may also be referred to as the deliverables or outcomes of the software process. These products may be specifications, process models, procedure manuals, code, test data, test results and so on. The software process is a set of activities that begins with the identification of a need and concludes with the retirement of a product that satisfies the need; or, more completely, a set of activities, methods, practices, and transformations that people use to develop and maintain software and its associated products (e.g. project plans, design documents, code, test cases, user manuals).

2.1 Learning from the Manufacturing Industry
The Japanese approach to quality control was founded by W. A. Shewhart, who is often referred to as the father of modern quality control. His philosophy is based on the motto "the better the quality, the lower the cost". Hajime Karatsu [Logothetis, 1989] explains the manufacturing process as follows: "If it is aimed to produce quality products, there will be great financial benefits. Withdrawal and return of products are reduced. Higher productivity will be achieved because the necessity to stop machines in order to replace materials will be less frequent. This means it is possible to reduce the operation rate. As the manufacturing system itself improves in quality, the cost will be minimized.
That will give rise to the company's reputation and will increase its sales." In the case of Software Engineering, considerable effort was expended on software testing, namely exercising finished code with 'suitable' test data. Despite these efforts systems continued to fail, and the community had to seek alternative or complementary methods for minimising losses in terms of financial cost and, more importantly, loss of human life. Additionally, legacy systems require constant maintenance: corrective, to address failures in use; adaptive, to accommodate changes in requirements; and perfective, to improve performance [Pressman 2000]. Through the study of the performance of existing systems it is hoped to improve the software process and in turn the products of that process.

2.2 Lifecycle Models as Process Models
Many lifecycle models, information systems development methodologies and tools have been proposed, developed and adopted to act as management tools. Software lifecycle models are paradigms for guiding the development process, aiding the planning, monitoring and controlling of projects. The nature of the problem, the methods and tools, the controls and the deliverables formulate the paradigm. An extensive study of fourteen (14) representative lifecycle models was presented in [Georgiadou 1995b], with specific emphasis on the position/timing of the Testing sub-process in each model. Fig. 1 depicts the approximate time of introduction of these models since 1970.

Fig. 1 - Lifecycle Models since 1970 (timeline of approximate introduction: Waterfall; NCC; Waterfall with Feedback; V-model; Prototyping; W-model; RAD; Client-server; X-model; OO-models)

2.2.1 Sequential Models
The waterfall model or classic life cycle (Fig. 2a) has been the most widely used lifecycle model for the best part of the last 30 years. It models software development as a definite set of steps, with all the products from one stage being passed on to the next, just as water cascades from one level to the next in a natural waterfall, with no flow back up. An improvement on the classical waterfall is the Stagewise Waterfall (Fig. 2b), which allows for iterations of two consecutive phases (going back one step). In both of these models the testing activity is left towards the end of the lifecycle, charged with the responsibility of finding the errors that might have been generated, inherited and compounded by previous erroneous actions. Testing the finished code mirrors the manufacturing activity of inspecting products at the end of the manufacturing line. This activity serves to identify the rejects and the items needing rework without looking at what causes the defects.


In real life, systems go through several iterations, particularly because user requirements are not fully understood or specified at the beginning. Activities within a phase need not be undertaken in strict sequential order: they can overlap, run in parallel or proceed iteratively. Parallelism and iteration are common within phases as well as between phases. Although one phase logically follows another, the cycle may be repeated many times during the life of a software system. Furthermore, the discovery of a problem in one phase often leads to backtracking to earlier phases in the cycle; this is supported by the stagewise waterfall model.

Fig. 2a - The waterfall model (Requirements → Architectural Design → Detailed Design → Code → Testing → Maintenance → Review)
Fig. 2b - The Stagewise Waterfall Model (the same stages, with feedback to the preceding phase)

2.2.2 Transformational and Iterative Models
The transformational model (Fig. 3) shows that systems undergo a series of transformations of descriptions, starting from high-level descriptions and moving to more detailed levels. This model includes maintenance and emphasises the need for validation and verification at every stage. The involvement of the user is an integral part of the prototyping paradigm. The cycle starts with requirements gathering and goes through a number of refinements. The prototype is a mechanism for clarifying the requirements and making improvements. Fig. 4 is our adaptation of Martin Shepperd's [Shepperd 1995] model of the prototyping lifecycle. Each time a new version is built, traditional functional testing techniques are applied. We enclosed the iterations in the dotted rectangle, showing the 'real' system as the delivered version.


Fig. 3 - Transformational Model (adapted from Software Engineering Reference Book, 1991): transformations of descriptions from Concepts to Requirements, Architecture and Detailed Design, with Validation & Verification applied to each description.

Fig. 4 - A prototyping lifecycle (adapted from Shepperd): raw requirements → produce informal reqts → build prototype → evaluate prototype (change requests feed back into the prototype) → system spec. → build 'real' system → 'real' system.

Rapid Application Development (RAD) relies on small teams of experienced developers, supported by the active assistance of the recipient community. This is the reason for the emphasis placed on the socio-technical aspects of systems and the use of methodologies like the Soft Systems Method or ETHICS. RAD (Fig. 5) makes extensive use of automated tools at all stages, particularly data repositories and automatic test generation [Bell 1992].


Fig. 5 - Rapid Application Development (cycle of Identification, Formulation and preparation, Appraisal, Implication, Evaluation, Monitoring)

RAD requires developers and customers who are committed to the rapid-fire activities necessary to complete a system in a much abbreviated time frame. If commitment is lacking from either constituency, RAD projects will fail. Large, scalable projects can be modularised so that each function is allocated to a different RAD team for development in parallel, with subsequent integration of all components. If a system cannot be properly modularised, building the components necessary for RAD will be problematic. If high performance is an issue, the RAD approach may not work. RAD is also not appropriate when technical risks are high.

2.2.3 The Spiral Model
The most widely acclaimed model is the Spiral Model [Boehm 1988], which encompasses the best features of the classic lifecycle and prototyping. In addition, it includes analysis of alternatives and identification and resolution of risks (Fig. 6).

Fig. 6 - The Spiral Model [Boehm 1988]: each cycle passes through four quadrants (determine objectives, alternatives and constraints; evaluate alternatives and identify and resolve risks, supported by risk analysis and prototypes; develop and verify the next-level product, from concept of operation through requirements validation, design validation and verification, detailed design, code, unit test, integration and test to acceptance test and implementation; plan the next phases), with cumulative cost growing as progress spirals outwards.

If the risk analysis shows severe problems, the project can be altered or even abandoned altogether. This model maximises the quality of the produced system because it represents the first real attempts to identify the causes (process defects) rather than the symptoms (product defects). Testing is part of the prototyping sub-process. Boehm himself identified the following weaknesses of the spiral model lifecycle:
• it does not yet match the world of contract software acquisition;
• it places a high premium on the ability of software engineers to do risk assessment;
• the spiral model's generic activities making up the life cycle model need to be further expanded upon, so that all the participants in the process are similarly aware of it and its framework.
More recent versions of the Spiral model, such as the WinWin model, have included aspects of negotiation for accommodating different interests and resolving conflicts [Pressman 2000]. Boehm's spiral model is still of great use for product development, especially when attached to the traditional life cycle management model.

2.2.4 The V-Model
The V-Model (Fig. 7) was introduced in the 80s, identifying testing activities at all stages of the lifecycle. The left side of the V depicts the requirements analysis, the architectural design and the module design, whilst the right-hand side depicts progressive levels of integration and testing, acceptance testing being the last stage. At the bottom of the V we see the coding activity. All testing is carried out against the specifications at the appropriate level (denoted by the horizontal connections in the model). The process is now repeatable and defined, with increased probability of finding errors very early in development [Burr 1996].

Fig. 7 - The V-Model: the left leg descends through Requirements, Specification/Architectural Design, Design/Detailed Design and Module Design to Code at the apex; the right leg ascends through Unit Test (Debugged Modules), Software Integration and System Test (Verified System), System Integration and Acceptance Test (Valid System) to Maintenance, each test level matched horizontally to its specification level.

2.2.5 The W-Model
The W-Model (Fig. 8) is a variation of the V-Model showing the method by which an artefact is examined for correctness, consistency and completeness. These methods are inspections, audits, reviews and various types of testing. Again the process is repeatable and defined.

Fig. 8 - The W-Model: the development activities (Requirements Specification, System Design, Detail Design, Code) are each paired with an examination activity (Requirements Inspection & Audit/Review, Design Inspection & Review, Design Review, Code Review) and a testing activity (Component Testing, Integration Testing, System Testing, Acceptance Testing), culminating in Release.

2.2.6 The X-Model
The X-Model (Fig. 9) marks a significant departure from all previous lifecycles because it introduces explicit activities for process measurement and improvement according to the Deming/Shewhart process improvement loop (Plan, Do, Study, Act), which operates between each of the functions on the legs of the W-Model [Burr 1996].

Fig. 9 - The X-Model: the Plan-Do-Study-Act loop of Process Measurement and Improvement wrapped around the development 'W' model.


2.2.7 RAD and OO
Rapid Application Development (RAD) is a linear sequential software development process model that emphasises an extremely short development lifecycle. This 'speed' is achieved by using component-based construction, which links closely to the main characteristic of object-orientation. With 4GL environments it is possible to create large reusable components, enabling the involvement of users and the speedy and effective extraction and specification of requirements. Visual languages (themselves O-O implementations) lend themselves to RAD development. In all cases automated tools are used to facilitate construction of the software. It is estimated that less than 15% of code in any given application is original and unique, and that 75% of the functionality is common to more than one program [Georgiadou 1995a]. There are therefore potentially great savings to be made by reusing available code. Programmers write similar code time and again, but it is never exactly the same, so if code is to be reused it needs to be modified first. Reusable software should be of high quality, so that its users have confidence that it will work without error; in other words it must be correct and efficient. Reusable software should have standard interfaces, so that users can use it in a manner that complies with standard operating conventions and integrate it easily into new and existing systems. Reusable software must be robust, able to control the way in which external components interact with it: it must not only work correctly through the interface for which it is guaranteed, but if it is misused the user must be informed in a way that cannot be ignored. Robustness can only be achieved through encapsulation.
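The robustness and encapsulation requirements described above can be illustrated with a minimal sketch (the `BoundedStack` component and its interface are our hypothetical example, not taken from the paper): the representation is hidden, and misuse raises an exception the caller cannot silently ignore.

```python
class BoundedStack:
    """A small reusable component: the public interface is the only way in,
    and misuse is reported in a way the caller cannot ignore."""

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be positive")
        self._capacity = capacity   # leading underscore: implementation detail
        self._items = []            # hidden representation

    def push(self, item) -> None:
        if len(self._items) >= self._capacity:
            raise OverflowError("stack is full")   # misuse reported, not ignored
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self) -> int:
        return len(self._items)
```

Because clients depend only on `push`, `pop` and `len`, the internal list could later be replaced by another data structure without breaking any code that reuses the component.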
Separating the interface from the implementation and hiding the implementation is an important requirement, but so is allowing the algorithm to be chosen separately from the data-type implementation decision, so that the software developer using the component can generate a customised implementation. Documentation should be produced in such a way that developers understand what the software does, how it can be used, and how it may be modified if necessary. Object-Oriented Programming and Object-Oriented Design are important new influences in software engineering, providing a holistic and generic approach to all phases of software development. In particular, if an object-oriented programming language is to be used to implement a system, the analysis can feed directly into the design and programming phases of the software development cycle.

2.2.8 Software Development with Reuse
An advantage of reusing code is the mere fact that the software in question has been used and tested, and is therefore reliable. There should be a policy for determining responsibility for errors and a mechanism for resolving problems, as well as an agreement on how customisation affects warranty. To overcome resistance, reusing software should be rewarded. Reuse and design are facets of the same activity. When commencing a new project, available reusable resources in the form of libraries of object classes can be identified and then reused, either directly or by means of inheritance. Development should be easier when tested, stable components are already available. Several object-oriented lifecycles have been proposed, most notably the Fountain and the Fractal models. We present an outline of these two models, followed by an experience report from research and practice at the University of North London using Object-Oriented development tools. Humphrey [Humphrey 2000] observes that "The principal issues in designing for reuse are to define standard interfaces and calling conventions, to establish documentation standards, to produce high quality products, and to provide application support."

2.2.9 The fractal software development lifecycle
Based on Foote's fractal model, McGregor and Sykes [McGregor 1992] have proposed the model shown in Fig. 11, attempting to show that the way classes develop is similar to a fractal, using recursion. This also fits with their concept of modelling the problem domain, realising the classes and refining the resulting products. Further refinement of the model is necessary to identify the position and nature of the testing activity.

Fig. 11 - The fractal model: Model, Realize and Refine applied recursively at every level of development.

They also proposed an O-O lifecycle coupled with a class lifecycle, shown in Figs 12a and 12b. They identified three different routes for class development, namely development from scratch, evolution from an existing class, and reuse of an existing class. Quality must be built into the system from the very first steps of development. Quality is not a feature that can be added to software after it is created; it must be planned for, checked for and worked on at every single step in the development process. Investing more effort in early testing and validation of specifications, analysis and design models ensures that the resulting software artefacts will gain the benefits of reduced maintenance costs, higher software reliability and more user-responsive software.

Fig. 12a - O-O lifecycle: Client Input → System Description → Analysis → High Level Design → Class Development → Incremental Implementation → Incremental Testing → Maintenance.
Fig. 12b - The class lifecycle: Class Specification → Analysis → development from scratch, evolution from existing classes, or reuse of existing classes → Implementation → Instance Creation → Development of Test Cases & Testing → Refinement and Maintenance.

The experience of testing procedural systems is useful but not adequate for testing Object-Oriented systems. Class testing takes two major forms: specification-based and program-based. Specification-based testing treats the class as a black box and is intended to determine whether the class performs according to its specification. Program-based testing considers the implementation of the class. Due to the nature of Object-Oriented development, adequate measures are necessary to ensure that classes are tested for correctness and consistency so that they can take their place in the library. Testing is a central activity in Object-Oriented development. Barbey and Strohmeier [Barbey 1994] discuss the problems involved in testing Object-Oriented systems and the need for testing, despite the strong belief that this method of development somehow eliminates or reduces that need. They discuss the three main areas of encapsulation, inheritance and polymorphism, which form the very nature of O-O. They argue that encapsulation generates the problem of observability, inheritance opens the issue of retesting, and polymorphism introduces undecidability: "Moreover, erroneous casting (type conversions) are also prone to happen in polymorphic contexts and can lead to non-easily detectable errors."


The differences in the architecture of O-O systems require the development of a new testing methodology in order to evolve towards truly component-based development of software. Developing for reuse requires that the development of components (classes) has a lifecycle independent of the application for which the components were originally produced. Components must be sufficiently general to be reusable by future projects. Exhaustive testing of a newly developed class is plausible if the interface is narrow and well defined. The testing of classes should begin at the top, abstract level and proceed down an inheritance relationship. New classes are thus more easily tested, as the tested pieces do not need re-testing. Incremental testing effort is reduced because many existing test cases can be reused, and some portions of the new class will not need to be tested, having already been tested in the existing class.

2.2.10 eXtreme Programming puts testing at the beginning of the lifecycle
According to Beck [Beck 2001], traditional life cycle models are inadequate and should be replaced by incremental design and rapid prototyping, using one model from concept to code through analysis, design and implementation. In other words, design starts while still analysing, and coding starts soon after the detailed design begins. Portions of the design are tested along with the implementation. XP is successful because it stresses customer satisfaction: the methodology is designed to deliver the software the customer needs when it is needed. XP empowers developers to confidently respond to changing customer requirements, even late in the life cycle [Beck 2001], [Holcombe 2001]. XP is one of several new methodologies known as agile or lightweight, denoting a break away from too many rules and practices. The XP community believe that design information should be captured in a form that is machine processable, and that reused components should be smoothly integrated. Software development tools should support the management of this development process, enabling top-down application development through to code generation, and bottom-up re-engineering of existing code, fully integrated with the development environment.
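The test-first style that gives this section its title can be shown in miniature (the `leap_year` example is ours, not the paper's): the test is written first, expressing the requirement, and only then is just enough code written to make it pass.

```python
# Step 1: write the test first -- it is an executable statement of the requirement.
def test_leap_year():
    assert leap_year(2000) is True     # divisible by 400
    assert leap_year(1900) is False    # divisible by 100 but not by 400
    assert leap_year(1996) is True     # divisible by 4
    assert leap_year(1999) is False    # not divisible by 4

# Step 2: write just enough code to make the test pass.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()   # a failing test here would signal unfinished work
```

In XP this cycle is repeated for every small increment, so the growing test suite doubles as a regression net when requirements change late in the life cycle.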

3. Methodical Development of Information Systems
Considerable advances have also been achieved by the introduction of methods and tools for the systematic planning and control of the development process. Systems development methodologies have been proposed and used to address a number of problems, typically: ambiguous user requirements, unambitious systems design, unmet deadlines, exceeded budgets, poor quality software with numerous 'bugs', and poor or non-existent documentation. This meant that the software was difficult to maintain and inflexible to future changes.


By applying a methodology to the development of software, insights are gained into the problems under consideration, and thus they can be addressed more systematically. Software should comply with the important quality requirements of timeliness, relevance, accuracy and cost effectiveness. Software Engineering aims to bring the more rigorous methods of the engineering world to bear on software development. Methodologies provide the environment for repeatable procedures with specified deliverables at each stage of the system lifecycle. Over 2000 methodologies (and brand names) are in existence to date [Avison 1985, Jayaratna 1994, Georgiadou 1995c], each claiming to solve most, if not all, of the problems of systems development.

3.1 Information Systems Development Methodologies
The need for methods was recognised in the late 60s, at the same time as the emergence of the term Software Engineering. This came as a result of the whole of human activity (transport, manufacturing, the service industry, social services such as health, education, entertainment) being underpinned by the use of computers and computer systems, which tended to be late, over budget and unreliable. The development of methods expressed a constant search for solutions to these problems, particularly cost saving and product quality improvement. The majority of methods concentrated on technical aspects, ignoring the organisational structure and culture and often ignoring the user. Purely technology-oriented approaches have not always been effective. The whole family of Structured Methods, Information Engineering, JSD and Formal Methods tend to concentrate mainly on technical issues, often ignoring 'people problems'. The Soft Systems Method [Checkland 1990] established the principle of the owner's viewpoint, emphasising the importance of human involvement.
The ETHICS methodology [Mumford 1979] is based on a socio-technical approach recognising the importance of knowledge and psychological fit from the developers' and, most importantly, from the user's point of view. Neither of these methodologies covered the whole of the development process, needing to 'borrow' techniques, mainly from the structured methods, for the implementation of solutions. Fig. 13 represents a historical view of the introduction of methodologies since the mid-sixties.

Fig. 13 - Information Systems Development Methodologies over 30 years (timeline 1965-2000: ad-hoc, NCC, Structured Methods, Z, JSD, SSM, ETHICS, Multiview, OMT, Coad-Yourdon, DSDM, XP, CBD)

The development of methods has been taking place in parallel with the development of automated tools, particularly CASE and I-CASE, providing the environment for correctness, consistency and completeness of systems through the use of central repositories, data dictionaries and encyclopaedias. Adherence to notational standards and specified procedures enabled the improved allocation of specialised, inexperienced and experienced staff to appropriate tasks and facilitated team work. Many of these developments reflected similar developments in the manufacturing industry, although in systems development the focus was primarily placed on product improvement rather than process improvement, which came as a by-product.

3.2 Formal Methods
The most important characteristic of a quality software system is reliability [Holcombe 1998, Berki 2001]. Systems with high reliability are complete, consistently available and display correct behaviour. A bug-ridden system will not be reliable; an unreliable system will not function according to its requirements and thus will not be effective in increasing productivity, and so on. The objectives of all specification methods are lack of ambiguity, consistency, completeness and correctness. What sets formal methods apart is the higher likelihood of achieving these ideals. To be consistent, facts stated in one place in a specification should not be contradicted in another. Consistency is ensured by mathematically proving that initial facts can be formally mapped onto later statements within the specification. A correct, complete set of requirements is one that correctly and completely states the desires and needs of the user. Completeness is difficult to achieve even when formal methods are used: some aspects of the system may be left undefined, whether omitted by mistake or left out deliberately to allow freedom to the developer. Correctness has many facets, and it is certainly related to validation and verification [Holcombe 1999, Berki 2001]. Although formal methods allow us to reason logically about system development and can fully demonstrate the adequacy of programming products, they are not used extensively because not enough programmers are skilled in their use. In terms of academic and industrial training, formal methods are insufficiently understood because they require an appreciation of abstract concepts and great attention to detail. These skills take considerable time to assimilate.
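The flavour of stating facts precisely enough to check them mechanically can be conveyed with executable pre- and postconditions (our illustration in a design-by-contract style; this is far weaker than a formal proof, but shows the kind of precision a formal specification demands):

```python
def integer_sqrt(n: int) -> int:
    """Return the largest r with r*r <= n.

    The specification is stated as executable assertions: a precondition on
    the input and a postcondition relating input and output. A formal method
    would prove these hold for all inputs rather than check them at run time.
    """
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

Each call either satisfies the stated contract or fails loudly, which is a run-time approximation of the consistency and correctness ideals discussed above.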

3.5 Hybrid Methods and Contingency Approaches Hybrid methods range from a selection of techniques to an amalgam of techniques as in the case of Multiview which is a contingency approach drawing on aspects covered by methodologies as far apart as soft systems and JSD [Wood-Harper 1985]. The Soft Systems Analysis of human activities and the participative and socio-technical views have been combined with the more conventional work on data analysis and structured analysis so as to create a theoretical framework for tackling computer systems design which attempts to take account of the different points of view of all the people involved in using the system.


Some Software Engineers argue that hybrid methods lead back to the days when DIY analysts operated before information systems development methodologies were adopted, producing idiosyncratic and unmaintainable systems of variable value. The choice of which tool or technique is appropriate is a very skilled job, and these skills are few and far between [Avison 1985].

3.6 A Taxonomy of Methodologies
Avison and Fitzgerald's [Avison 1985] classification was later depicted in a hierarchical taxonomy (Fig. 14) [Georgiadou 1995], which was used to underpin further investigation into requirements engineering, quality models and compromises between the various viewpoints [Siakas 1997], [Berki 1997], [Berki 1998].

Specialised Soft

SSM

ETHICS

Hybrid

Hard

XMachines xTreme P Strucured

Formal

O-O

OMT SSADM

YOURDON

IE

....

STRADIS

MERISE

Z

..... Booch

VDM VDM

Fig. 14 An ISDM Tree

3.7 Choosing the Most Appropriate Development Methodology

The choice of an appropriate method has been of concern to industry and academia alike. Making the wrong choice of method or tool can be very costly for a company. How are software engineers to succeed in making these choices? It has been suggested that Feature Analysis can be used for identifying factors of interest and for prioritising them. David Law in his 'Methods for comparing methods' [Law 1988], and more recently the 'DESMET¹ qualitative evaluation procedure using features analysis' [Law 1992], explored the issues of evaluation. Feature Analysis was proposed as an effective qualitative evaluation method which is mainly useful for deriving a shortlist of candidate methodologies and/or tools from the immense range available.

¹ DESMET (Determining an Evaluation System for Methods and Tools) is a DTI/SERC funded project with the following partners: NCC, GEC Marconi Systems, BNR and the University of North London.


Once the features are identified they can be prioritised by allocating ratings and by suitable scoring. Table 1 shows such a list along with the level of importance and desirability of each one.

Table 1. Features List with Importance Levels

Feature                      Importance
Cost                         M
Usability                    M
Productivity                 HD
Efficiency                   HD
Project management           HD
Compatibility/Portability    D
Maintainability              D
Documentation                D

Key: M = mandatory, HD = highly desirable, D = desirable, N = nice to have

It can be seen that features are allocated an importance rating. In addition, scoring systems suggested by DESMET may use a simple grading of 1 (low) to 4 (high), with tolerance levels for acceptability termed as deviations from the desirable [Law 1992]. However, both the selection and the scoring can be very subjective, so quantitative evaluation approaches have been suggested which introduce added rigour and credibility. "Evaluation studies, surveyed during the course of developing DESMET, revealed that the two main evaluation approaches, the qualitative and the quantitative, have been pursued in parallel and at a distance from each other" [Mohamed 1992]. A combination of the DESMET qualitative and quantitative methods for evaluating methodologies and tools was applied throughout this research, and the results can be found in [Milankovic-Atkinson 1995], [Georgiadou 1995], [Berki 1998], [Georgiadou 1998], [Georgiadou 2001].
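The scoring procedure just described can be made concrete with a small sketch that combines Table 1's importance levels with 1-4 grades. The numeric weights chosen for M/HD/D/N below are our own illustrative assumptions, not values prescribed by DESMET.

```python
# Hedged sketch of Feature Analysis scoring in the DESMET spirit: each
# feature has an importance rating (Table 1) and each candidate tool is
# graded 1 (low) to 4 (high) per feature. The M/HD/D/N weights are
# illustrative assumptions, not DESMET's own values.

IMPORTANCE_WEIGHT = {"M": 4, "HD": 3, "D": 2, "N": 1}

FEATURES = {
    "Cost": "M", "Usability": "M", "Productivity": "HD", "Efficiency": "HD",
    "Project management": "HD", "Compatibility/Portability": "D",
    "Maintainability": "D", "Documentation": "D",
}

def score(grades: dict) -> int:
    """Weighted sum of 1-4 grades; a mandatory (M) feature graded 1 fails."""
    for feature, importance in FEATURES.items():
        if importance == "M" and grades[feature] <= 1:
            return 0  # fails a mandatory feature outright
    return sum(IMPORTANCE_WEIGHT[FEATURES[f]] * g for f, g in grades.items())

tool_a = {f: 3 for f in FEATURES}  # a uniformly average candidate
print(score(tool_a))  # 69
```

Ranking several candidate tools by such scores yields the shortlist that Feature Analysis is intended to produce, while making the subjectivity of both weights and grades explicit.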

4. Quality and Process Models

Quality models have attempted to relate software attributes, qualitatively and/or quantitatively, for the purpose of estimating or assessing overall quality, though most frequently they target one specified attribute of interest such as productivity, maintainability or usability.

4.1 Top Down Process Improvement Approaches

The Total Quality Management (TQM) movement was based on the fundamental principle of empowerment of all involved, fostering a questioning attitude and encouraging the active exchange of ideas and criticism.


Fig. 15 encapsulates the interactions and influences of values, beliefs and experiences on decisions and actions. TQM relies on continuous feedback for improvement.

[Fig. 15 showed a feedback loop linking Values & Attitudes, Objectives, Decisions, Actions, Review and Experience.]

Fig. 15 Continuous Improvement through TQM

Other process assessment and/or improvement models reported in the literature include the Goal Question Metric (GQM) approach [Basili 1994], BOOTSTRAP [Kuvaja 1994], the CMM [Paulk 1993], SPICE (ISO/IEC TR 15504:1998) [http://www.sqi.gu.edu.au/spice/] and the ISO suite of standards. These models provide a high-level view of what the process of a software development organisation ought to be. Such models are based on the consensus of a designated working group about how software should be developed (planned, controlled and managed) and how it should be maintained (changed, improved and supported).

4.2 The Capability Maturity Model

The Capability Maturity Model (CMM) has been the most successful model so far for describing the software process, practices and attitudes of a software development organisation. The CMM was developed at the Software Engineering Institute of Carnegie Mellon University, USA. Below is a brief description of the CMM five-point scale, namely Initial, Repeatable, Defined, Managed and Optimising [Zahran 1998], [Humphrey 1997]:

Initial: This level is characterised as ad hoc. Typically, the organisation operates without formal procedures, cost estimates or project plans. There are few mechanisms to ensure that procedures are followed. Tools, if they exist, are not well integrated. Change control is lax or non-existent. Senior management neither hears about nor understands software problems and issues. Success generally depends on the efforts of individuals, not the organisation.

Repeatable: At the repeatable level, project controls have been established over quality assurance, change control, cost and schedule. This discipline enables earlier successes to be repeated, though the organisation may have problems applying these techniques to new applications.


Defined: The defined software process, for both management and engineering activities, is documented, standardised and integrated across the organisation. The process is visible; that is, it can be examined and improvements suggested. Typically, the organisation has established a software engineering process group to lead the improvement effort, keep management informed on progress, and facilitate the introduction of other software engineering methods.

Managed: Achieving the fourth, or managed, level requires that measures of software process and product quality be collected so that process effectiveness can be determined quantitatively. A process database and adequate resources are needed to continually plan, implement and track process improvements.

Optimising: At the optimising level, quantitative feedback data from the process allows continuous process improvement. Data gathering has been partially automated. Management has changed its emphasis from product maintenance to process analysis and improvement. Defect cause analysis and defect prevention are the most important activities added at this level.

There are two major ways the maturity model can be applied: for software process assessment (SPA) and for software capability evaluation (SCE). A software process assessment is a means for organisations to identify their strengths, weaknesses, existing improvement activities, and key areas for improvement. A software capability evaluation is an independent evaluation of an organisation's software process as it relates to a particular acquisition. It is a tool that helps an external group (an acquirer) determine the organisation's capability to produce a particular product of high quality, on time and within budget.

4.3 The Maturity of Lifecycle Models

In 1995 the author studied different lifecycle models [Georgiadou 1995] with particular emphasis on the position of testing in the whole of the lifecycle. It was observed that in more mature lifecycles, and hence more mature software engineering processes, testing and other quality assurance techniques moved to the earlier parts of the lifecycle. A juxtaposition of the historical introduction of the models with the CMM scale (Fig. 16) aimed to show that over the last 30 years, as we moved from the waterfall model to incremental models, the maturity of the underlying process was increasing. It must be emphasised that these observations were not tested through formal measurements. The original juxtaposition in [Georgiadou 1995] has now been modified to include CBD and XP (Extreme Programming). The latter has placed emphasis on starting the whole process with 'Stories' and test data. Thus testing has now moved to the beginning of the lifecycle, which is a reversal of the monolithic linearity of the waterfall model. It remains to be seen how successful this new 'movement' will be. If indeed it proves to bring improvements to software quality, and, at the same time, to enhance the quality of the process through the empowerment of the developers and the customers, our initial


assertion will be strengthened. In accordance with our initial assertion, the maturity of the XP process would then be the highest of all.

[Fig. 16 showed a timeline (1970-2000) of lifecycle models - Waterfall, NCC, Waterfall with Feedback, V-model, Prototyping, X-model, W-model, RAD, Client-server, OO-models - juxtaposed with the CMM Process Assessment Scale (1 ad hoc, 2 repeatable, 3 defined, 4 managed, 5 optimised).]

Fig. 16 Maturity of Lifecycle Models

5. Software Quality Metrics

"To measure is to know. If you cannot measure it, you cannot improve it." (William Thomson, later Lord Kelvin, 1824-1907)

According to Kitchenham [Kitchenham 1996], "Software Metrics can deliver: support for process improvement, better project and quality control and improved software estimation". Direct measurement of quality factors is often possible only very late in the life cycle. For example reliability, which is concerned with how well a software system functions and meets a user's requirements, can be measured only after the software has been used for a stated period of time under stated conditions, whereas indirect measurements of quality, such as the number of discrepancy reports (deviations from requirements), can be obtained earlier in the life cycle. Other estimates of quality can be made by developers even earlier than the indirect measurements of quality. Managing the development process requires the collection of suitable metrics which will provide insights into the strengths and weaknesses of the process. What to measure, how to measure and when to measure are the fundamental questions on which we need to focus in an effort to gain insights for further improvement. According to ISO 9126, software quality may be evaluated by six characteristics, namely:

1. Functionality
2. Reliability
3. Efficiency
4. Usability
5. Maintainability
6. Portability
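As a concrete illustration of an indirect measure obtainable earlier in the life cycle than direct reliability measurement, discrepancy-report counts can be normalised by product size to give a comparable defect-density figure. The sketch below is illustrative and its figures are invented.

```python
# Illustrative sketch: an indirect quality measure available earlier in the
# life cycle than direct reliability measurement. All figures are invented.

def defect_density(discrepancy_reports: int, kloc: float) -> float:
    """Discrepancy reports per thousand lines of code (KLOC)."""
    return discrepancy_reports / kloc

print(defect_density(42, 12.5))  # 3.36 reports per KLOC
```

Tracking such a surrogate across releases gives early insight into quality trends long before field reliability data exists.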


A software metric is a measurable property which is an indicator of one or more of the quality criteria that we want to measure. Metrics must measure the correct attributes in order for them to be useful.

5.1 Software Measurement

Measurement is defined as the process of assigning symbols, usually numbers, to represent an attribute of the entity of interest, by rule [Fenton 1991, 1994, 1995], [Shepperd 1995]. Entities of interest include objects (e.g. code, specification, person) or processes (e.g. analysis, error identification, testing). Distinct attributes might be length of code, duration or cost. Representation is usually in numbers (or other mathematical objects, e.g. vectors). Finally, in order to provide objectivity we need to assign numbers (symbols) according to explicit rules for choosing which symbol should represent the attribute. Such rules ensure that the assignment is not random. Fenton and Pfleeger [Fenton 1997] provide a refined definition of measurement: "Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to characterise them according to clearly defined rules. The numeral assignment is called the measure." This definition provides a rigorous basis for determining when a proposed measure characterises an attribute, and provides rules for determining which statistical analyses are relevant and meaningful. Hence, in order to understand the definition of measurement in the software context, we need to identify the relevant entities and attributes which we are interested in characterising numerically.

5.2 Software Metrics over 30 Years

The first metric to be proposed was the number of lines of code (LOC) in a program. Although used by several estimation models such as COCOMO [Boehm 1981], it is no longer considered useful [Beizer 1990, Fenton 1991].
In the late 70s Maurice Halstead defined a number of linguistic metrics [Halstead 1977] based on two program attributes, namely the number of distinct operators in the program and the number of distinct operands in the program. In principle these attributes (and hence the program length) can be estimated before any code is actually written. The Halstead length is a measure of the complexity of a program, and as such it can be used to predict defect rates: Halstead derived a formula for the total number of defects in a program, and also derived predictions for the total effort required to complete a program, which we won't go into here. Since Halstead proposed these measures there has been extensive confirmation of their predictions, although there have also been several criticisms. Much of the research on control flow measurement reflects the focus of the software engineering community on structured programming during the late 1970s and throughout the 1980s.
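The measures just described can be sketched directly. The length, vocabulary and volume definitions are Halstead's standard ones [Halstead 1977]; the delivered-defects estimate B = V/3000 is the commonly quoted form of the defect formula the text alludes to, so the constant should be treated as an assumption here, and the counts in the example are invented.

```python
import math

# Sketch of Halstead's "software science" measures [Halstead 1977]:
# n1, n2 = distinct operators/operands; N1, N2 = their total occurrences.

def halstead(n1: int, n2: int, N1: int, N2: int) -> dict:
    vocabulary = n1 + n2
    length = N1 + N2                                    # observed length
    estimated_length = n1 * math.log2(n1) + n2 * math.log2(n2)
    volume = length * math.log2(vocabulary)             # program "size" in bits
    bugs = volume / 3000                                # B = V/3000 (commonly quoted form)
    return {"length": length, "estimated_length": estimated_length,
            "volume": volume, "bugs": bugs}

m = halstead(n1=10, n2=20, N1=100, N2=150)              # counts are invented
print(round(m["volume"], 1), round(m["bugs"], 2))
```

Because the estimated length depends only on the distinct operator and operand counts, it can in principle be computed from a design before any code is written, which is what made these measures attractive as early predictors.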


Subsequently numerous static and dynamic metrics were proposed for both procedural and object-oriented code. Notable contributions came from [Boehm 1981], [Henry 1981] and [Chidamer 1991, 1994]. Design complexity deals with the overall morphology of the system. Fan-in and fan-out, the level of coupling and the degree of cohesion are measures of morphology. Fenton [Fenton 1991] uses the concepts of width of call and depth of call.

5.3 Using Software Complexity Measures

If that's what they said is Gordium, I say
That not everything which is difficult is useful
And an answer less often suffices to rid the world of a Question
Than a deed.
(from The Gordian Knot, The Impact of the Cities 1925-1928, by Bertolt Brecht)

In December 1976 Tom McCabe published his paper 'A Complexity Measure' [McCabe, 1976]. It was to become one of the most quoted papers, and its complexity metric the most used metric, for a quarter of a century. It was the simplicity of the concept that made the metric so appealing: software engineers could at least calculate the number of independent paths through a given program and hence generate test data to 'fully' exercise the various branches. His fundamental assumption was that software complexity is intimately related to the number of control paths generated by the code. In other words, he tackled part of the first reason a programmer might find code complex. McCabe developed a method that maps a program to a directed, connected graph. The nodes of the graph represent decision or control statements; the directions indicate control paths that dictate the program flow. The enclosed regions in the graph represent code chunks that execute according to the decision or control statements. McCabe stated that the complexity of a program equals the number of enclosed regions in its mapped graph plus one. He called this number the cyclomatic complexity of the program. McCabe computed the cyclomatic complexity for a group of programs. By comparing the cyclomatic complexity with the frequency of errors, he deduced that a program with cyclomatic complexity exceeding 10 is error-prone, and therefore recommended that programs should not exceed this threshold. McCabe and others found a high correlation between programs with high failure rates and high cyclomatic complexity. Studies that attempt to correlate error rates with computed complexity measures show mixed results. Some studies show that experienced programmers provide the best prediction of error rates and software complexity. Measuring software complexity becomes particularly important in object-oriented design and programming, where complexity manifests itself in new ways.
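The graph formulation above can be sketched in two equivalent ways: from the control-flow graph as V(G) = E - N + 2P, or, for single-entry, single-exit code with binary decisions, as the number of decision points plus one. The example graph below (an if-else nested in a while loop) is our own construction.

```python
# Sketch of McCabe's cyclomatic complexity [McCabe 1976], two equivalent ways.

def cyclomatic_from_graph(edges: int, nodes: int, components: int = 1) -> int:
    """V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def cyclomatic_from_decisions(decisions: int) -> int:
    """For single-entry, single-exit code with binary decisions: D + 1."""
    return decisions + 1

# An if-else nested in a while loop: 7 nodes, 8 edges, 2 decision points.
print(cyclomatic_from_graph(8, 7))    # 3
print(cyclomatic_from_decisions(2))   # 3
```

In McCabe's usage, a module whose V(G) exceeds 10 would be flagged as error-prone and a candidate for restructuring.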
Metrics can provide

the feedback we need during the design process to avoid unnecessary complexity and its inevitable consequence, defective software. Traditionally, metrics have had two purposes: the prediction of defects, and the prediction of effort. Both types of prediction are based on the simple notion that the more complex a piece of software is, the more likely it is to contain defects, and the longer it will take to build. Since programs are not (usually) subjected to the sort of formal analysis that would allow us to prove that they are defect-free, a metric allows us to predict defects on the basis of data collected on previous projects which show correlations between the metric and defect rates. Similarly, correlations between the metric and the effort required to develop software can allow us to estimate how much effort will be required on the next project. A useful effort-predicting metric must be available fairly early on in the development life cycle, while a defect-predicting metric can be used at all stages. Boris Beizer observed that it is important to be aware of the purpose of a metric; confusing an effort-predicting metric with a defect-predicting metric can nullify the metric's usefulness [Beizer 1990].

5.4 Morphology and Good Design

Brooks [Brooks 1995] states that a good top-down design avoiding bugs can be achieved in the four ways listed below.
1. By presenting the structure of the design in a clear manner so that it is easier to understand the precise statement of requirements and functions of the modules.
2. By partitioning modules so that they are independent of each other to avoid system bugs.
3. By suppressing details so that flaws in the structure are more apparent.
4. By representing information at the correct level of detail so that testing is made easier.

Why might we be especially interested in measurements for early life-cycle products?
Because we would like to predict attributes of the eventual implemented system such as cost, effort, size, complexity and quality. Complexity here means the totality of the internal attributes, and we aim to control it in software products. We are interested in both internal and external attributes [Fenton 1991] because, for example, the reliability of a program depends not just on the program itself but also on the compiler, machine and user, and productivity depends on the people involved and on the management of the project. It is often necessary to use surrogate measures for complex external attributes [Fenton 1996], [Kitchenham 1996]. For example, the time taken to carry out specified maintenance tasks might be used to provide an indication of the maintainability of software [Georgiadou 1994].
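The prediction idea running through this section can be sketched with an ordinary least-squares line fitted to (complexity, defects) pairs from past projects, then used to estimate defects for a new module. The data points below are invented for illustration.

```python
# Illustrative sketch: predicting defects from a complexity metric by fitting
# a least-squares line to historical project data. All data points invented.

def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

complexity = [2, 5, 8, 12, 20]   # cyclomatic complexity of past modules
defects = [1, 2, 4, 6, 10]       # defects later found in those modules

slope, intercept = fit_line(complexity, defects)
predicted = slope * 15 + intercept  # expected defects for a module with V(G) = 15
print(round(predicted, 1))
```

Such a model is only as good as the historical correlation it rests on, which is precisely why the mixed empirical results noted above matter.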

6. Conclusions and Future Work

Software artefacts, even 'small programs', are among the most complex artefacts that humans produce, and software development projects are among our most complex undertakings. Nowadays our lives are governed by computers, communications and computer-based systems. Failures of such systems lead to significant economic losses or even the loss of human lives. There are many different causes of failures in computer-based systems, including physical faults, maintenance errors, design and implementation mistakes resulting in hardware or software defects, and user or operator mistakes. These causes - faults - are all undesired circumstances that hinder the system from delivering the expected service. The software engineering community has come to appreciate the importance of investing more effort in early testing and validation of specifications and of analysis and design models, in order to ensure that the resulting software artefacts will gain the benefits of reduced maintenance costs, higher software reliability and more user-responsive software. Software engineering is developing a sound basis for requirements specification, analysis and design of information systems. Emphasis has lately been placed on the formal aspects of systems design for accuracy, consistency, correctness and completeness. Computer science, information systems and software engineering have witnessed a large number of lifecycle models, methodologies and metrics, all aiming to address problems of quality of both software products and the software process. Unfortunately, few of these have been subjected to careful experimentation, and they are often adopted with limited empirical evidence of their correctness or effectiveness. Further investigations by the author aim to derive software metrics linking appropriate internal and external attributes. Such links (relationships/functions) include the way in which complexity impacts on the understandability, usability and maintainability of a system. In particular we have initially concentrated on analysing and understanding legacy code with a view to restructuring it in order to improve its understandability and maintainability.
Although many of these relationships are intuitively understood, we are producing rigorous measures [Georgiadou 1993, 1994, 1998, 2001] which will ultimately form a metrics framework for process and product improvement.

References

[Avison 1995] Avison, D.E., Fitzgerald, G.: Information Systems Development: Methodologies, Techniques and Tools, McGraw-Hill, 1995
[Barbey 1994] Barbey, S., Strohmeier, A.: The Problematics of Testing Object-Oriented Software, SQM'94 Second Conference on Software Quality Management, Edinburgh, Scotland, UK, July 26-28 1994, M. Ross, C.A. Brebbia, G. Staples and J. Stapleton (Eds.), Vol. 2, 1994, pp. 411-426
[Barbor 2001] Barbor, N., Georgiadou, E.: "Investigating the applicability of the Taguchi Method to Software Development", Quality Week 10th Annual International Conference, San Francisco, USA
[Bell 1992] Bell, S., Wood-Harper, T.: "Rapid Information Systems Development", McGraw-Hill
[Beck 2001] Beck, K.: Extreme Programming Explained, Software Quality Week, San Francisco, May 2001
[Berki 1997] Berki, E., Georgiadou, E., Siakas, K.: "A methodology is as strong as the user involvement it supports", International Symposium of Software Engineering in Universities (ISSEU '97), Finland, March 1997
[Berki 1998] Berki, E., Georgiadou, E.: "A comparison of qualitative frameworks for information systems development methodologies", in Proceedings of the Twelfth International Conference of the Israel Society for Quality, Jerusalem, Israel
[Berki 2001] Berki, E.: "Establishing a Scientific Discipline for Capturing the Entropy of Systems Process Models: CDM-FILTERS, A Computational and Dynamic Metamodel as a Flexible and Integrated Language for the Testing, Expression and Re-engineering of Systems", PhD Thesis, University of North London, October 2001
[Boehm 1981] Boehm, B.: Software Engineering Economics, Englewood Cliffs, New Jersey, Prentice Hall
[Boehm 1988] Boehm, B.: "A Spiral Model for Software Development and Enhancement", Computer, Vol. 21, No. 5, May 1988
[Brecht 1925] Brecht, Bertolt: Plays, Poetry and Prose (edited by John Willett and Ralph Manheim), Eyre Methuen, London, 1976
[Brooks 1987] Brooks, F.P.: "No Silver Bullet: Essence and Accidents of Software Engineering", Computer, April 1987
[Brooks 1995] Brooks, F.P.: The Mythical Man-Month: Essays on Software Engineering, 2nd edn, Addison-Wesley
[Burr 1995] Burr, A., Georgiadou, E.: "Software development maturity - a comparison with other industries", 5th World Congress on Total Quality, New Delhi, India, Feb. 1995
[Burr 1996] Burr, A., Owen, M.: Statistical Methods for Software Quality, International Thomson Publishing Inc.
[Cant 1994] Cant, S.N., Henderson-Sellers, B., Jeffrey, D.R.: Application of cognitive complexity metrics to object-oriented programs, Journal of Object-Oriented Programming, July/August 1994, pp. 52-63
[Cartwright 1998] Cartwright, M., Shepperd, M.J.: "An Empirical View of Inheritance", Proc. EASE 98 Empirical Assessment and Evaluation in Software Engineering, Keele, UK
[Checkland 1990] Checkland, P., Scholes, J.: "Soft Systems Methodology in Action", Wiley, 1990
[Chidamer 1991] Chidamber, S.R., Kemerer, C.F.: Towards a Metrics Suite for Object Oriented Design, Proceedings of OOPSLA'91, in ACM SIGPLAN Notices, Vol. 26, No. 11, p. 197, Nov. 1991
[Chidamer 1994] Chidamber, S.R., Kemerer, C.F.: A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994, pp. 476-491
[Choi 1990] Choi, S.C., Scacchi, W.: "Extracting and Restructuring the Design of Large Systems", IEEE Software
[Churcher 1995] Churcher, N.I., Shepperd, M.J.: Comments on "A Metrics Suite for Object Oriented Design", IEEE Transactions on Software Engineering, Vol. 21, No. 3, March 1995
[Deming 1989] Deming, E.: Out of the Crisis, MIT Center for Advanced Engineering Study, Cambridge, Mass.
[Fenton 1991] Fenton, N.: Software Metrics - A Rigorous Approach, Chapman & Hall
[Fenton 1994] Fenton, N.E.: Software measurement: a necessary scientific basis, IEEE Transactions on Software Engineering, SE-20, pp. 199-206
[Fenton 1994] Fenton, N., Pfleeger, S., Glass, R.L.: Science and Substance: A Challenge to Software Engineers, IEEE Software, pp. 86-95
[Fenton 1995a] Fenton, N., Whitty, R., Iizuka, Y. (eds.): Software Quality Assurance and Measurement, A Worldwide Perspective, International Thomson Computer Press, London, 1995
[Fenton 1996] Fenton, N.: The Empirical Basis for Software Engineering, in A. Melton (ed.), Software Measurement, International Thomson Computer Press, London, pp. 197-217
[Fenton 1997] Fenton, N.E., Pfleeger, S.L.: Software Metrics: A Rigorous & Practical Approach, PWS Publishing Company
[Georgiadou 1995a] Milankovic-Atkinson, M., Georgiadou, E., Sadler, C.: "RETRO - Reusability,
Engineering, Testing, Restructuring and Objects", 4th Software Quality Conference '95, Dundee, Scotland
[Georgiadou 1995b] Georgiadou, E., Milankovic-Atkinson, M.: "Testing and information systems development lifecycles", 3rd European Conference on Software Testing Analysis and Review (EuroSTAR'95), London, UK
[Georgiadou 1995c] Georgiadou, E., Sadler, C.: "Software Quality: Myths, Methods and Metrics", 5th World Congress on Total Quality, New Delhi, India
[Georgiadou 1995d] Georgiadou, E., Sadler, C.: "Achieving quality improvement through understanding and evaluating Information Systems Development Methodologies", 3rd International Conference on Software Quality Management, SQM'95, Seville, Spain
[Georgiadou 1996] Georgiadou, E., Berki, E.: "Improving Systems Specification Understandability by using a Hybrid Approach", INSPIRE'96 International Conference, Bilbao, Spain
[Georgiadou 1998] Georgiadou, E., Hy, T., Berki, E.: "Automated qualitative and quantitative evaluation of software methods and tools", Proceedings of the Twelfth International Conference of the Israel Society for Quality, Jerusalem, Israel
[Georgiadou 1999] Georgiadou, E., Milankovic-Atkinson, M.: "A formal experiment to verify Object-Oriented Metrics", INSPIRE'99, Crete, Greece
[Georgiadou 2000] Georgiadou, E.: "Software process quality assessment for small to medium sized enterprises", SBED, Manchester, UK
[Halstead 1977] Halstead, M.: Elements of Software Science, Elsevier North-Holland, 1977
[Henderson-Sellers 1994] Henderson-Sellers, B.: Identifying Internal and External Characteristics of Classes likely to be useful as Structured Complexity Metrics, OO Conference, South Bank University, December 1994
[Henry 1981] Henry, S.M., Kafura, D.G.: Software Structure Metrics Based on Information Flow, IEEE Transactions on Software Engineering, Vol. 7, No. 5, 1981, pp. 510-518
[Holcombe 1988] Holcombe, M.: "X-Machines as a Basis for Dynamic System Specification", Software Engineering Journal, March 1988
[Holcombe 1993] Holcombe, M.: An Integrated Methodology for the Specification, Verification and Testing of Systems, Software Testing: Verification & Reliability, Sep-Dec 1993
[Holcombe 1996] Holcombe, M.: Correct Systems - Can we build them and does it matter?, Annual ACM Lecture, The Centenary Symposium, School of Computing, University of North London, April 1996
[Holcombe 1998] Holcombe, M., Ipate, F.: "Correct Systems - Building a Business Process Solution", Springer-Verlag, 1998
[IEEE 1990] IEEE: IEEE Glossary of Software Engineering Terminology, 610.12, 1990
[ISO 1994] ISO 9001: "Quality Systems - Model for Quality Assurance in Design/Development, Production, Installation and Servicing", International Organisation for Standardisation
[Humphrey 1995] Humphrey, W.S.: "A Discipline for Software Engineering", Reading, MA: Addison-Wesley
[Humphrey 2000] Humphrey, W.S.: "Introduction to the Team Software Process", Reading, MA: Addison-Wesley
[Jackson 1983] Jackson, M.: "Systems Development", Prentice Hall, 1983
[Jackson 1993] Jackson, M., Zave, P.: "Domain descriptions", in Proceedings of the 1st International Symposium on Requirements Engineering, San Diego, CA, pp. 56-64, 1993
[Jackson 1994] Jackson, M.: "Problems, Methods and Specialisation", Software Engineering Journal
[Jayaratna 1994] Jayaratna, N.: "Understanding and Evaluating Methodologies, NIMSAD: A Systemic Approach", McGraw-Hill
[Jones 1991] Jones, C.: Applied Software Measurement, McGraw-Hill
[Johnson 1988] Johnson, R.E., Foote, B.: Designing Reusable Classes, Journal of Object-Oriented Programming, June/July 1988, pp. 22-35
[Juran 1999] Juran, J.M., Godfrey, A.B.: Juran's Quality Handbook, McGraw-Hill
[Kitchenham 1989] Kitchenham, B.A., Walker, J.G.: "A Quantitative Approach to Monitoring Software Development", Software Engineering Journal, January 1989
[Kitchenham 1992] Kitchenham, B.A.: DESMET Handbook of Data Collection and Metrication, Book 1: Software Measurement Goals, NCC Internal Project Report, 1992
[Kitchenham 1994] Kitchenham, B.A., Linkman, S.G., Law, D.T.: "Critical review of quantitative assessment", Software Engineering Journal, March 1994
[Kitchenham 1995] Kitchenham, B., Pickard, L., Pfleeger, S.: Case Studies for Method and Tool Evaluation, IEEE Software, July 1995, pp. 52-62
[Kitchenham 1996] Kitchenham, B.: Software Metrics - Measurement for Software Process Improvement, NCC, Blackwell, 1996
[Kolewe] Kolewe, R.: Metrics in Object-Oriented Design and Programming, Software Development, 1993, pp. 53-62
[Kuvaja 1994] Kuvaja, P. et al.: Software Process Assessment and Improvement: The BOOTSTRAP Approach, Oxford, Blackwell Publishers
[Law 1988] Law, D.: "Methods for Comparing Methods: Techniques in Software Development", NCC Publications, 1988
[Law 1992] Law, D., Naeem, T.: "DESMET: Determining an Evaluation Methodology for Software Methods and Tools", Proceedings of BCS Conference on CASE - Current Practice, Future Prospects, Cambridge, England, March 1992
[Lewis 1991] Lewis, T.G.: "CASE: Computer-Aided Software Engineering", Van Nostrand Reinhold, 1991
[Lewis 1995] Lewis, J.A.: Quantified Object-Oriented Development: Conflict and Resolution, 4th Software Quality Conference, Dundee, 1995, Proceedings Vol. 1, pp. 220-229
[Li 1993] Li, W., Henry, S.: Object-Oriented Metrics that Predict Maintainability, Journal of Systems and Software, 1993, 23, pp. 111-122
[Logothetis 1989] Logothetis, N., Wynn, H.P.: Quality Through Design: Experimental Design, Off-line Quality Control and Taguchi's Contributions, Oxford Science Publications
[Lorenz 1994] Lorenz, M., Kidd, J.: Object-Oriented Software Metrics, Prentice-Hall, 1994
[MacDonell 1997] MacDonell, S.G., Shepperd, M.J., Sallis, P.J.: "Metrics for Database Systems: An Empirical Study", in Proc. 4th IEEE International Metrics Symposium, Albuquerque, 1997
[Matsumoto 1989] Matsumoto, Y.: "An Overview of Japanese Software Factories", in Japanese Perspectives in Software Engineering (eds. Matsumoto, Y. and Ohno, Y.), Addison-Wesley, 1989
[McCabe 1976] McCabe, T.: "A Complexity Measure", IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, pp. 308-320
[McCabe 1976] McCabe, T., Butler, C.: "Design Complexity Measurement and Testing", Communications of the ACM, Vol. 32, No. 12, pp. 1415-1424
[McCall 1977] McCall, J.A., Richards, P.K., Walters, G.F.: Factors in Software Quality, RADC TR-77-369, US Rome Air Development Center Reports
[McGregor 1992] McGregor, J.D., Sykes, D.A.: "Object-Oriented Software Development: Engineering Software for Reuse", Van Nostrand Reinhold, 1992
[Mohamed 1992] Mohamed, W.A., Sadler, C.J.: "Methodology Evaluation: A Critical Survey", pp. 101-112, Proceedings of Eurometrics'92 Conference on Quantitative Evaluation of Software & Systems, Brussels, April 1992
[Mumford 1979] Mumford, E., Weir, M.: "Computer Systems in Work Design - The ETHICS Method", Associated Business Press, 1979
[Mumford 1988] Mumford, E.: "Defining systems requirements to meet business needs: a case study example", Computer Journal, 28, 2
[Paulk 1993] Paulk, M.C., Curtis, B., Chrissis, M.B.: "Capability Maturity Model", Version 1.1, IEEE Software, July 1993, pp. 19-27
[Pfleeger 1998] Pfleeger, S.L.: "Software Engineering, Theory and Practice", Prentice Hall, 1998
[Pressman 2000] Pressman, R.: "Software Engineering - A Practitioner's Approach", McGraw-Hill, European Edition
[Shepperd 1995] Shepperd, M.J.: Foundations of Software Measurement, Prentice-Hall: Hemel

Hempstead, England [Shepperd 1996] Shepperd, M.J. Schofield, C. Kitchenham, B. 'Effort estimation using analogy', 18th IEEE Intl. Softw. Eng. Conf. [Siakas 1997] Siakas, K. , Berki, E.,Georgiadou, E., Sadler, C. "The complete alphabet of quality software systems" , to be presented at the 7th World Congress for Total Quality Management, New Delhi, India [Siakas 2000] Siakas Kerstin V., Georgiadou E., Sadler, C. “ Software Quality Management from a Cross Cultural Viewpoint”Software Quality Journal, December 2000 [Sommerville 2001] Sommerville I. Software Engineering (6 th ed.) Pearson Education [Stapleton 1995] Stapleton, J. "RAD", SQM'95, Seville, 1994 Keynote Address [Weyuker 1993] Weyuker E. Can We Measure Software Testing Effectiveness?" Proceedings of IEEE-CS International Software Metrics Symposium, May 1993, pp. 100-107 [Wood-Harper 1985] Wood-Harper, A.T., Antill, L. Avison, D.E.:"Information systems definition: the multiview approach", Blackwell, 1985 [Zahran 1998] Zahran Sami (1998) Software process Improvement, Practical Guidelines for business Success, Software Engineering Institute, SEI Series in Software Engineering AddisonWesley Longman, UK [Hamphrey 1997] Humphrey Watts S. (1997) Introduction to the Personal Software Process SEI Series in Software Engineering, Software Engineering Institute, Carnegie Mellon University Addison Wesley Longman, Inc.

Bootstrap
Haase, V. and Messnar, R., 'Bootstrap: Fine-Tuning Process Assessment', IEEE Software, July 1994, pp. 25-35.

Background Reading

Comparisons
Paulk, M.C., 'Comparing ISO 9001 and the Capability Maturity Model for Software', Software Quality Journal, 2, 1993, pp. 245-256.
Järvinen, J., 'On Comparing Process Assessment Results: BOOTSTRAP and CMM', Software Quality Management (SQM'94), Edinburgh, 1994, pp. 247-261.

Quality Attributes
Siakas, K., Berki, E., Georgiadou, E. and Sadler, C., 'The Complete Alphabet of Quality Software Systems: Conflicts and Compromises', World Conference on Total Quality Management, New Delhi, India, February 1997.

TQM
Kondo, Y., 'Importance of Employee Motivation in TQM', 5th World Congress on Total Quality, New Delhi, February 1995, pp. 46-52.


CMM

Zahran, S., Software Process Improvement: Practical Guidelines for Business Success, SEI Series in Software Engineering, Addison-Wesley Longman, UK, 1998.
Humphrey, W.S., Introduction to the Personal Software Process, SEI Series in Software Engineering, Carnegie Mellon University, Addison-Wesley Longman, 1997.
Humphrey, W.S., A Discipline for Software Engineering, Addison-Wesley, USA, 1995.
Humphrey, W.S., Managing the Software Process, Addison-Wesley, 1989.

Quality Models (Cont.)

P-CMM
Curtis, B., Hefley, W.E. and Miller, S., Overview of the People Capability Maturity Model, CMU/SEI-95-MM-01, 1995.
Jack, R., 'Personal Issues in Software Cost Estimation', 5th Software Quality Conference, 9-11 July, Dundee, 1997.

Bootstrap
Haase, V.H., 'Bootstrap - Measuring Software Management Capabilities: First Findings in Europe', Fourth IFAC/IFIP Workshop, Austria, May 1992.
Kuvaja, P., 'New Developments in Software Process Improvement', Keynote, SQM'99 Conference, Southampton, March 1999.

SPICE
Dorling, A., 'SPICE: Software Process Improvement and Capability dEtermination', Software Quality Journal, 2, 1993, pp. 209-224.
Rout, T.P., 'SPICE: A Framework for Software Process Assessment', Software Process Improvement and Practice, Pilot Issue, 1995, pp. 57-66.

