Testing SAP R/3: A Manager's Step-by-Step Guide

JOSE FAJARDO
ELFRIEDE DUSTIN

John Wiley & Sons, Inc.

This book is printed on acid-free paper.

Copyright © 2007 by John Wiley & Sons, Inc. All rights reserved. Wiley Bicentennial Logo: Richard J. Pacifico

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services, or technical support, please contact our Customer Care Department within the United States at 800-762-2974, outside the United States at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our Web site at www.wiley.com.

Fajardo, Jose, 1974–
Testing SAP R/3 : a manager's step-by-step guide / Jose Fajardo, Elfriede Dustin.
p. cm.
Includes index.
ISBN: 978-0-470-05573-1 (cloth : acid-free paper)
1. SAP R/3—Testing. 2. Business enterprises—Computer programs—Testing. 3. Client/server computing. I. Dustin, Elfriede. II. Title.
HF5548.4.R2F34 2007
650.0285'53—dc22
2006036651

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


This book is dedicated to the loving memory of my mother, Maria T. Arregoces


Contents

CHAPTER 1  Introduction
CHAPTER 2  Status Quo Review of Existing Testing Practices
CHAPTER 3  Requirements
CHAPTER 4  Estimating Testing Costs
CHAPTER 5  Functional Test Automation
CHAPTER 6  Test Tool Review and Usage
CHAPTER 7  Quality Assurance Standards
CHAPTER 8  Assembling the QA/Test Team
CHAPTER 9  Establishing and Maintaining Testing Resources
CHAPTER 10  Planning and Construction of Test Cases
CHAPTER 11  Capacity Testing
CHAPTER 12  Test Execution
CHAPTER 13  Management of Test Results and Defects
CHAPTER 14  Testing in an SAP Production Environment
CHAPTER 15  Outsourcing the SAP Testing Effort
APPENDIX A  Advanced Testing Concepts
APPENDIX B  Case Study: Accelerating SAP Testing
Index


About the Authors

Jose Fajardo is a former SAP consultant with PricewaterhouseCoopers LLP and Computer Sciences Corporation (CSC). He has worked as an independent SAP consultant for Fortune 100 companies utilizing automated testing strategies and, in particular, implementing SAP R/3. His competency in automated test tools includes products from Mercury Interactive as well as Rational Corporation. With subject matter expertise in validating and managing testing of ERP systems, Fajardo has participated in verification of customized implementations of SAP R/3, SAP R/3 bolt-ons, custom applications, and non-SAP applications interfacing with SAP R/3.

Fajardo has been instrumental in guiding and mentoring Fortune 100 companies in the development of testing strategies and methodologies, creating testing standards, documenting Test Readiness Review checklists, documenting entrance/exit/release criteria, implementing testing best practices, creating quality assurance (QA) teams and QA processes, mentoring junior programmers, staffing testing efforts with resources, performing verification and validation activities, managing outsourcing agreements, preparing for audits of testing results, managing the execution of test scripts, and implementing automated testing strategies. Fajardo has published several articles on automation strategy, performance testing, regression testing, functional testing, implementing testing best practices, and testing standards and procedures.

Elfriede Dustin is author of Effective Software Testing and lead author of Automated Software Testing and Quality Web Systems, books that have been translated into various languages and have sold tens of thousands of copies throughout the world. Her latest book, The Art of Software Security Testing, coauthored with security experts Chris Wysopal, Lucas Nelson, and Dino Dai Zovi, was published by Symantec Press in November 2006.

Dustin has also authored various white papers on the topic of software testing, teaches testing tutorials, and is a frequent speaker at software testing conferences. She is the cochair of VERIFY, an international software testing conference held in the Washington, D.C., area. In support of software test efforts, Dustin has been responsible for implementing automated testing or has served as the lead consultant/manager guiding the implementation of automated and manual software testing efforts. Dustin has a BS in computer science and over 15 years of information technology experience, and she currently works as an independent consultant in the Washington, D.C., area. You can reach her via her Web site at www.effectivesoftwaretesting.com or at [email protected].


About the Contributors

Lorrie Collins is a national solutions director for Spherion Corporation. She leads the Software Quality Management Practice, which provides Quality Assurance, Validation and Testing, and Test Automation services to help clients maximize their technology investments. Collins is certified in information technology (IT) project management and has over 20 years of IT experience across numerous industries, technical platforms, and environments.

Bob Koche began his career developing software and evolved into a writer and speaker on software development practices. As a software entrepreneur he is associated with a number of category firsts, including the first SQL database on a PC (acquired by IBM), the first Web QA tool (acquired by Microsoft), and now the first SAP-centric test automation tool.

Linda G. Hayes is the CTO of WorkSoft, Inc., developer of next-generation test automation solutions. She is the founder of three software companies, including AutoTester, the first PC-based test automation tool. Hayes holds degrees in accounting, tax, and law and is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 under 40 by the Dallas Business Journal. She is a columnist for ComputerWorld, Datamation, and StickyMinds.com; authored the Automated Testing Handbook; and coedited Dare to Be Excellent with Alka Jarvis on best practices in the software industry. Her article "Quality Is Everyone's Business" won a Most Significant Contribution award from the Quality Assurance Institute and was published as part of the Auerbach Systems Development Handbook. You can contact her at [email protected].



Preface

Planning, preparing, scheduling, and executing SAP test cycles is a time-consuming and resource-intensive endeavor that requires participation from several project members. SAP projects are prone to informal, ad-hoc test approaches that decrease the stability of the production environment and tend to increase the cost of ownership for the SAP system. Many SAP project and test managers cannot provide answers for questions such as how many requirements have testing coverage, the exit criteria for a test phase, the audit trails for test results, the dependencies and correct sequence for executing test cases, or the cost figures for a previously executed test cycle. Fortunately, through established testing techniques predicated on guidelines and methodologies (e.g., the ASAP SAP Roadmap methodology, IBM's Ascendant methodology, and Deloitte's ThreadManager methodology), enforcement of standards, application of objective testing criteria, test case automation, implementation of a requirements traceability matrix (RTM), independent testing, and formation of centralized test teams, many of the testing risks that plague existing or initial SAP programs can be significantly reduced.

This book is written for SAP managers, SAP consultants, SAP testers, and team leaders who are tasked with supporting, managing, implementing, and monitoring testing activities related to test planning, test design, test automation, test tool management, execution of test cases, reporting of test results, test outsourcing, planning a budget for testing activities, enforcing testing standards, and resolving defects. The book revisits testing standards and techniques supported by the Software Engineering Institute, the Institute of Electrical and Electronics Engineers, and Unified Modeling Language (UML), which have dominated the landscape for producing software-based applications. It provides the reader with information for incorporating proven software testing standards and techniques when planning a major SAP testing cycle for either an SAP upgrade or an initial SAP installation. The methods and techniques described in this book offer the reader a different (not new) way to look at SAP testing deliverables.

The approaches and methodologies advocated in the book for SAP testing are recommended for teams of people involved in different aspects of SAP testing. Typically, in SAP implementations there is much confusion and obfuscation in determining which project resources are responsible for testing tasks, and test results in particular, for testing cycles such as performance and user acceptance testing, and the book addresses this prevalent problem. A major SAP test cycle such as an integration, regression, or performance test may require the expertise and contributions of subject matter experts (SMEs), business analysts, system architects, integration managers, functional team leaders, and Basis team, test team, and development (advanced business application programming [ABAP]) team members. The book aims to logically define and identify the roles and responsibilities for all expected stakeholders affiliated with an SAP test cycle and makes arguments in favor of adopting the concept of centralized test teams and enforcing quality assurance (QA) standards.

The book also provides much needed industry guidance for companies that want to establish an automated testing framework, construct an RTM, adhere to industry regulations (e.g., Sarbanes-Oxley), participate in an outsourced agreement for SAP testing, reduce and compress the testing schedule with the concept of orthogonal arrays, and learn about recent trends in SAP testing vis-à-vis the concept of SAP accelerators. The methodology presented in these pages is not offered as a panacea for SAP testing. It is simply a reiteration of powerful, straightforward, proven testing techniques and approaches for every aspect of SAP testing.


CHAPTER 1

Introduction

SAP testing is complex, difficult, and esoteric. The perils and risks to the intended SAP production system are greatest when the project team does not have enough skilled testers and lacks a robust approach for testing the system, tracing the entire system design and architecture to testable requirements, and eliminating defects. A comprehensive plan or approach for testing an SAP system, even for a small project, includes assembling a test team, acquisition of test tools, establishment of a test lab, construction of a test calendar, monitoring of testing activities, training for testing participants, and completion of a test cycle based on predefined criteria. A test plan for SAP includes the approach; description; roles and responsibilities for conducting usability testing; white box and black box testing for interfaces; negative testing; security testing for roles; integration, scenario, unit, and performance testing; user acceptance testing; and regression testing. Few companies implementing SAP are structured or organized to address all these various types of testing.

This book offers guidance and assistance to project managers, test managers, and configuration leaders who want to establish a testing framework from the ground up based on industry-established principles for evaluating requirements, providing coverage for requirements, diagramming processes, test planning, estimating testing budgets and resource allocation, establishing an automation framework, and reviewing case studies from large SAP implementations.

WHY THIS BOOK?

SAP is by far the world's largest enterprise resource planning (ERP) application, and this position is not likely to be relinquished anytime soon. SAP traces its origins to its mainframe-based R/2 version from the 1970s. Even though SAP and other vendors have developed implementation methodologies for SAP that emphasize the activities needed to design the system and go live, many of these methodologies fall short of addressing robust testing practices, particularly in light of the prominence of automated test tools and the outsourcing of testing activities.

The roles and activities for SAP configuration, SAP advanced business application programming (ABAP) development, and SAP Basis are for the most part well understood and clearly defined. In contrast, when it comes to SAP testing, the roles and activities are a mystery and subject to interpretation. It is possible for a person considering becoming an SAP consultant to take courses on how to configure a particular SAP module, how to build security roles, or how to develop ABAP programming code, but not how to test SAP R/3. Functional testing of SAP is often left to individuals without a testing background whose main tasks are to configure the system. Other types of SAP testing, such as technical and system testing for system performance and backup and recovery, are also left to individuals without a testing background whose main responsibilities are to design and maintain the technical architecture for SAP.

This book was written to help SAP projects address weaknesses in the SAP testing life cycle, define testing and quality assurance activities, and overcome misconceptions about SAP testing. The book contains contributions from industry leaders in the fields of SAP testing and test tool automation, along with templates to help project managers and test managers establish immediate testing best practices. It covers all aspects of SAP testing from preparation to resolution of defects.

WHAT DOES THIS BOOK OFFER ME?

This book is written from the point of view of the company or entity requesting SAP services from a systems integrator. The book emphasizes testing practices predicated on the following principles:

■ Building a system with quality as opposed to merely testing for quality.
■ Adhering to testing practices from SAP's ASAP Roadmap methodology, IBM's Ascendant™ methodology and guide for implementing SAP, Institute of Electrical and Electronics Engineers (IEEE) standards, and the Capability Maturity Model (CMM) from the Software Engineering Institute (SEI).
■ Drafting of clearly stated requirements that can be validated with test cases.
■ Construction of a requirements traceability matrix (RTM) to verify all in-scope requirements (a minimal illustration follows this list).
■ Supporting each testing effort with an exit, entrance, or suspension criterion.
■ Validation of production support changes through thorough regression testing.
■ Subjecting all test results to third-party verification and approval (sign-offs) from appropriate project stakeholders.
■ Diagramming processes and requirements with Unified Modeling Language (UML) notation.
■ Documentation and adherence to test plans and test strategies that are subjected to version control.
■ Early formation of a test team that participates in design workshops during the blueprint phase, change control board meetings, and the "go/no go" decision.
■ Inclusion and enforcement of quality assurance (QA) standards.
■ Peer reviews and inspections for testable requirements.
■ Independent verification and validation of system design.
■ Independent "hands-on" testing with participation from end users who execute test cases.
■ Functional testing with manual testing and automated test tools.
■ Maintaining testing deliverables such as test cases, test results, and testing defects in a test management tool that includes security features and audit trails.
■ Compliance with company and industry audits.
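The RTM principle lends itself to a small illustration. The sketch below is a hypothetical Python fragment, not something prescribed by the ASAP or Ascendant methodologies, and its requirement and test case IDs are invented; it shows only the core idea that every in-scope requirement must map to at least one test case.

```python
# Minimal requirements traceability matrix (RTM) sketch.
# Requirement and test case IDs below are invented for illustration.

rtm = {
    "REQ-001 Post goods receipt":     ["TC-101", "TC-102"],
    "REQ-002 Approve purchase order": ["TC-201"],
    "REQ-003 Archive closed orders":  [],  # no coverage yet
}

def coverage_report(rtm):
    """Flag requirements with no test cases and compute overall coverage."""
    uncovered = [req for req, cases in rtm.items() if not cases]
    covered = len(rtm) - len(uncovered)
    print(f"Coverage: {covered}/{len(rtm)} requirements "
          f"({100.0 * covered / len(rtm):.0f}%)")
    for req in uncovered:
        print(f"  NOT COVERED: {req}")

coverage_report(rtm)
# Coverage: 2/3 requirements (67%)
#   NOT COVERED: REQ-003 Archive closed orders
```

In practice the matrix would live in a test management tool rather than in code, but the exit-criterion question it answers is identical: which in-scope requirements still lack coverage.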

The book is a primer for testing SAP from unit testing through regression testing for production support. The intended audience for the book includes test managers, project managers, integration leaders, test team members, QA personnel, and auditing groups. The book provides templates, case studies, and criteria to establish a framework for SAP testing, establish a test case automation strategy, mentor junior testers, and identify tasks and activities that the SAP system integrator is expected to accomplish for the client requesting SAP services. Specifically, this book will address the following activities, which are prevalent yet poorly conducted at most SAP projects:

■ Identifying the testable requirements.
■ How to ensure that requirements are testable, unambiguous, clearly defined, necessary, prioritized, and consistent with corporate policies or industry standards.
■ How to retain testing documentation to support audits (e.g., Section 404 of Sarbanes-Oxley).
■ Defining the scope of testing.
■ Creating a test plan and test strategy.
■ Developing a strategy for acquiring automated test tools and for automating processes that includes verification at the graphical user interface (GUI) and back-end layers.
■ How to create a library of automated test cases for production regression testing that can be executed unattended.
■ How to define and verify service-level agreements (SLAs).
■ Boundary testing for negative and positive testing.
■ How to apply quality standards from the SEI and the Rational Unified Process (RUP).
■ Creating robust flow process diagrams that include narratives.
■ Techniques for estimating the testing schedule, the duration of testing activities, and the number of testers needed.
■ How to compress or reduce the necessary number of test cases with the technique of orthogonal arrays (OATS) for projects implementing SAP variant configuration or projects that have multiple variations for end-to-end processes and are time constrained (see the sketch after this list).
■ Defining criteria for deciding which processes or scenarios are suitable for testing.
■ How to estimate testing costs and budget.
■ Creating an RTM to ensure that coverage has been provided for all types of testable requirements (e.g., functional, security, performance, archiving, technical, and development requirements).
■ Creating a test schedule and a test calendar.
■ Defining objective criteria to commence, finish, and stop testing.
■ Managing, categorizing, and resolving test results.
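The orthogonal-array idea deserves a concrete look at the arithmetic behind it. The sketch below is a hypothetical Python fragment: the factors and values are invented, and a greedy pairwise-covering pass stands in for a formally constructed orthogonal array, which on a real project would come from published tables or a dedicated tool. The point is the reduction: every pair of factor values is still exercised, with a fraction of the exhaustive combinations.

```python
from itertools import combinations, product

# Hypothetical factors for an order-to-cash variant test; values are invented.
factors = {
    "order_type":    ["standard", "rush", "consignment"],
    "payment_terms": ["net30", "net60", "prepaid"],
    "plant":         ["US01", "DE01", "MX01"],
}

names = list(factors)
all_pairs = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in factors[a] for vb in factors[b]}

def pairs_of(case):
    """All (factor, value) pairs exercised by one test case."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

# Greedy pass: keep picking the case that covers the most uncovered pairs.
candidates = [dict(zip(names, vals)) for vals in product(*factors.values())]
suite, uncovered = [], set(all_pairs)
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"Exhaustive cases: {len(candidates)}, pairwise suite: {len(suite)}")
# Prints roughly: Exhaustive cases: 27, pairwise suite: 10
# (a true orthogonal array achieves 9 here; either way, far fewer than 27)
```

With more factors the gap widens dramatically, which is why the technique matters for time-constrained end-to-end scenarios.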


CHALLENGES IN SAP TESTING

Established commercial implementation methodologies for SAP typically fail to address how requirements will be met, the criteria for testing, the framework for utilizing test tools, the resources necessary for testing, how to estimate testing budgets, specific testing roles and responsibilities, and how test defects will be managed and resolved. Furthermore, many factors hamper successful testing at most SAP projects, such as unclear requirements, inability to trace the system design to requirements, missing dedicated test teams, waiving defects without appropriate workarounds, and inadequate involvement of needed testing participants such as subject matter experts (SMEs) for capturing requirements and end users for user acceptance testing.

Despite these testing challenges, many SAP project managers perceive that their SAP implementation is successful or "fine" even when the production help desk team is flooded with complaints that the system does not perform necessary functionality, the production system does not meet intended performance SLAs, security roles are not defined and implemented correctly, the system produces short dumps because it cannot perform exception handling or not enough negative testing was conducted, data is not converted properly from legacy systems, and end users cannot find even the most basic data or necessary reports.

The SAP arena is replete with functional, development, and technical consultants who moonlight and parade as SAP testers for various testing efforts but often lack sufficient knowledge to establish a successful testing strategy and framework. More puzzling and baffling still, because many SAP projects have no dedicated test manager or centralized test team, the individuals with the least knowledge and skill in the area of testing are often the ones in charge of leading and managing the testing effort. Admittedly, testing at any SAP project is an integrated effort that requires the expertise and skills of several resources such as SMEs, functional configuration resources, ABAP developers, and business analysts. Yet executing testing activities without the guidance and help of testing professionals is analogous to taking a trip without knowing what the final destination will be.


Frequently, an individual moonlighting as an SAP tester will state that "testing is breaking or exploring the system" or that he "knows how to test," which undoubtedly leads to a misconception about what SAP testing is really all about. The truth is that many companies fail to adequately test an SAP system and instead deploy the system into production because testing is taking too long, which consequently forces the production support team to fix the system for the first six to eight months after deployment because it was never properly tested prior to its release into the production environment. Whether by accident or on purpose, the modus operandi of many corporations is to deploy an unstable and/or poorly tested SAP system into production because defects and system problems can be dealt with at a later date in the production environment, even though substantial empirical data demonstrates that removing system defects is least expensive when done in the early stages of testing. Industry data shows that removing system defects in a live production environment is at least 20 to 40 times more expensive than doing so in the unit-testing phase or during the requirements-gathering phase. Many defects can be eliminated or prevented altogether with thorough evaluation and peer review of requirements. Many corporations pay expensive consulting fees to fix production problems arriving at the production help desk rather than address these problems or defects during the applicable testing phase. The main reason this occurs is that SAP projects often do not spend the time or have the appropriate resources to ensure that the captured requirements are peer reviewed and evaluated with objective criteria, to construct an RTM to provide coverage for all requirements, and to establish objective testing criteria for each testing phase.

Another critical and often overlooked reason that defects that should have been resolved during testing slip into the production environment is that individuals acting as SAP testers cannot reach consensus on testing nomenclature or the test approach. The mere term testing in the SAP world is in and of itself broad enough to create ambiguity, since different individuals will have different perceptions and experiences about what testing means. Testing encompasses many activities, such as requirements gathering and traceability, test planning, test design, test execution, test reporting, test results, and resolution of defects, across a wide range of testing efforts such as unit, boundary, scenario, development, white/black box, security, smoke, integration, performance, user acceptance, and regression testing. Rarely, if ever, do two or more individuals from the configuration, development, or technical teams have the same nomenclature or understanding for a particular type of test. Chaos and inconsistency are the ensuing results of this misunderstanding about what the term testing entails and what activities are associated with it. Dedicated test teams can establish consistency in testing terms for all project members based on established guidelines and nomenclature from credible sources such as the Software Engineering Institute (SEI), IEEE standards, and the SAP ASAP Roadmap methodology.

A common theme repeated at many SAP projects is that conclusive evidence is missing to show that requirements have been met before releasing the system into the production environment. Most project managers or functional managers cannot answer with any degree of confidence or objectivity whether the in-scope requirements captured during the requirements phase have been met before releasing or deploying a system. This occurs because the concept of an RTM is not embedded within most, if not all, of the mainstream or conventional methodologies for implementing SAP, for either initial SAP implementations or SAP upgrades.

Test tools pose a challenge for many SAP projects. Test tools hold the promise of unattended test case playback at any time, increased testing coverage, testing of processes with multiple data and process variations, verification of objects and calculations, and generation of automated test logs with time stamps for audit purposes and compliance. Many SAP system integrators and test tool vendors are adept at convincing companies to spend hundreds of thousands of dollars acquiring automated test tools and test tool training only to have the test tools gather dust. Test tools can sit idle because the company acquiring them is missing an automation framework and thus cannot successfully engage the appropriate resources to maintain, install, and utilize them. The payback period or return on investment (ROI) for test tools is not maximized, or even reached, until a library of automated test cases can be constructed and reused frequently for future system releases or to support production regression testing, to the point where automated execution is cheaper than doing the same tasks with manual labor hours. Constructing a library of automated test cases is rarely achieved even by companies that have had SAP in the production environment for years, because they do not allocate the necessary skilled resources to maintaining and utilizing the test tools. Companies that commit hundreds of thousands of dollars to the acquisition of test tools that sit idle compromise their testing budgets. Consequently, these companies resort to testing SAP exclusively with manual testers.

Another common challenge for testing SAP is inadequate training at all levels for either cross-matrixed testing resources or dedicated testing resources. Training is needed for testers who are participating in one-time testing efforts such as user acceptance testing, or participating in all testing efforts for execution of test cases and resolution of defects. The test manager needs to develop the procedures for mentoring and educating all project resources who are expected to participate in testing activities. Training consists of the following activities:

■ Training dedicated testers on how to maintain and install automated test tools and test management tools, and how to develop automated test cases.
■ Training testing participants on test procedures for logging defects and reporting test results.
■ Training on how to evaluate and peer review requirements.
■ Training on testing nomenclature to standardize testing terms for all project members.
■ Training on roles and responsibilities for resolving defects.

The challenges mentioned above are some of the most prevalent problems and issues that permeate SAP projects, but they are by no means the only ones. Many SAP projects also suffer from poor documentation and configuration management for testing deliverables or work products, and from an inability to pass audits or to design a solution that complies with industry regulations and requirements. The aforementioned challenges are offered as illustrations that highlight the need to establish robust testing techniques, methodologies, strategies, and frameworks.


EARLY TESTING MATTERS

It is never too early to implement and establish the testing program. In fact, for readers familiar with the SAP ASAP Roadmap methodology, Exhibit 1.1 shows that testing strategies are defined as early as the project preparation phase. For readers familiar with the different software development life cycles, including the waterfall model, the software industry has developed a similar model known as the V-shaped model that emphasizes testing as a consideration throughout the development life cycle. Furthermore, industry standards suggest that testing early helps to decrease costs, since identifying and resolving defects early in the software development life cycle is much more economical than troubleshooting and resolving defects once the system has been deployed into the production environment. Testing early and often is instrumental to reducing development costs, ensuring fulfillment of in-scope requirements, and aligning with the project's scope statement.

The most effective testing programs start at the beginning of a project, long before any program code has been written. The requirements documentation is verified first; then, in the later stages of the project, testing can concentrate on ensuring the quality of the application code. Expensive reworking is minimized by eliminating requirements-related defects early in the project's life, prior to detailed design or coding work.

The requirements specifications for a software application or system must ultimately describe its functionality in great detail. Typically in SAP, requirements for initial implementations are captured during the blueprint phase with workshops where various stakeholders state what they expect SAP to accomplish for them, or, for existing SAP implementations, when undergoing major upgrades or implementing previously deferred requirements. One of the most challenging aspects of requirements development is communicating with the people who are supplying the requirements. Each requirement should be stated precisely and clearly, so it can be understood in the same way by everyone who reads it. If there is a consistent way of documenting requirements, it is possible for the stakeholders responsible for requirements gathering to effectively participate in the requirements process. As soon as a requirement is made visible, it can be tested and clarified by asking the stakeholders detailed questions.

EXHIBIT 1.1 Testing and Quality Assurance Activities for an Initial SAP Implementation

[The exhibit is a matrix of testing and QA Activities, Deliverables, and Tools across the five ASAP phases. Recoverable highlights: Project Preparation (define testing strategies; testing strategy paper, automation standards, quality assurance plan); Blueprint (present QA standards, review requirements, attend requirements workshops, conduct requirements peer reviews, assemble the test team, set up the test lab, procure and customize test tools; checklists for evaluating specifications, RTM, test plan and criteria, functional and technical design specs, flow process diagrams); Realization (define and test baseline and final-scope test cases, conduct development and integration testing, prepare for system testing, UAT preparation and execution; test readiness review, test cases, test results, test report, developed automated scripts, execution calendar, lessons learned); Final Preparation (execute stress/load/volume/performance testing, gather and interpret system testing results; automation framework, refined library of automated test scripts; LoadRunner); Go-Live Support (continuous improvement, define regression testing strategy and change control processes, automate and modify test scripts, participate in CCB meetings, execute test cases, document findings, support test tools and SOX; QuickTest Pro, Kintana).]

Note: During the realization phase, 50% or more of all project costs are dedicated to testing activities.


Whether your team develops requirements using UML and some form of use cases or by writing "the system shall" statements, a variety of requirement tests can be applied to ensure that each requirement is relevant and that everyone has the same understanding of its meaning. UML is a widely accepted technique for requirements gathering or for reverse engineering an existing system, and its notation is composed of multiple diagramming techniques such as use-case notation and activity, class, and sequence diagrams. In order to introduce the concept of "test early and test often," it is important to recognize the following two items: (1) involve testers from the beginning and (2) verify the requirements.

Involve Testers from the Beginning¹

Testers need to be involved from the beginning of a project's life cycle so they can understand exactly what they are testing and can work with other stakeholders to create testable requirements. Not only can testers verify the testability of a requirement, but they will also learn the thought process that went into the requirement as it applies to the application under test (AUT), making the tester more knowledgeable about the AUT.

A defect occurs when an executed test case produces test results that do not match the expected test results. Defect prevention is the use of techniques and processes that can help detect and avoid errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low: the only modifications will be to requirements documentation and possibly to the testing plan, which is also being developed during this phase. If testers (along with other stakeholders) are involved from the beginning of the development life cycle, they can help recognize omissions, discrepancies, ambiguities, and other problems that may affect the project requirements' testability, correctness, and other qualities.

¹Adapted from Elfriede Dustin, Effective Software Testing, Reading, MA: Addison-Wesley, 2002.
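The defect definition above reduces to a single comparison between actual and expected results. A minimal, hypothetical Python illustration (the test case ID and status values are invented):

```python
def run_test_case(case_id, actual, expected):
    """A defect exists exactly when actual results differ from expected results."""
    if actual != expected:
        print(f"{case_id}: DEFECT - expected {expected!r}, got {actual!r}")
        return False
    print(f"{case_id}: passed")
    return True

# Invented example: posting a goods receipt should set the order status.
run_test_case("TC-101", actual="status=OPEN", expected="status=RECEIVED")
# TC-101: DEFECT - expected 'status=RECEIVED', got 'status=OPEN'
```

The discipline this enforces is that the expected result must be written down before execution; a test case without a predefined expected result cannot, by this definition, surface a defect.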


A requirement can be considered testable if it is possible to design a procedure in which the functionality being tested can be executed, the expected output is known, and the output can be programmatically or visually verified. Testers need a solid understanding of the product so they can devise better and more complete test plans, designs, procedures, and cases. Early test-team involvement can eliminate confusion about functional behavior later in the project life cycle. In addition, early involvement allows the test team to learn over time which aspects of the application are the most critical to the end user and which are the highest-risk elements. This knowledge enables testers to focus on the most important parts of the application first, avoiding overtesting rarely used areas and undertesting the more important ones.

Some organizations regard testers strictly as consumers of the requirements and other software development work products, requiring them to learn the application and domain as software builds are delivered to the testers, instead of involving them during the earlier phases. This may be acceptable in smaller projects, but in complex environments it is not realistic to expect testers to find all significant defects if their first exposure to the application is after it has already been through requirements, analysis, design, and some software implementation. More than just understanding the "inputs and outputs" of the software, testers need deeper knowledge that can come only from understanding the thought process used during the specification of product functionality. Such understanding not only increases the quality and depth of the test procedures developed, but also allows testers to provide feedback regarding the requirements.
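The three conditions for testability (executable functionality, known expected output, verifiable output) can be made concrete with a small sketch. The Python fragment below is hypothetical; the credit-check rule and values are invented stand-ins for a real SAP transaction:

```python
def credit_check(order_value, credit_limit):
    """Stand-in for the system function under test."""
    return "BLOCKED" if order_value > credit_limit else "RELEASED"

# Testable requirement: "Orders exceeding the customer's credit limit are
# blocked." It can be executed, the expected output is known, and the
# output can be verified programmatically.
assert credit_check(order_value=120_000, credit_limit=100_000) == "BLOCKED"
assert credit_check(order_value=80_000, credit_limit=100_000) == "RELEASED"

# Contrast: "The order entry screen should be user friendly" admits no such
# procedure until it is restated with a quality measure (see next section).
```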

Verify the Requirements

In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement: "The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement." In other words, if a quality measure is specified for a requirement, any solution that meets this measure will be acceptable, and any solution that does not meet the measure will not be acceptable. Quality measures are used to test the new system against the requirements.

Attempting to define the quality measure for a requirement helps to eliminate requirements not suitable for implementation, and thus for testing. For example, everyone would agree with a statement like "the system must provide good value," but each person may have a different interpretation of "good value." In devising the scale that must be used to measure "good value," it will become necessary to identify what that term means. Sometimes requiring the stakeholders to think about a requirement in this way will lead to defining an agreed-upon quality measure. In other cases, there may be no agreement on a quality measure. One solution would be to replace one vague requirement with several unambiguous requirements, each with its own quality measure.

It is important that guidelines for requirement development and documentation be defined at the outset of the project. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Use cases from UML notation are one way to document functional requirements and can lead to more thorough system designs and test procedures. (Here, the broad term requirement will be used to denote any type of specification, whether a use case or another type of description of functional aspects of the system.)

In addition to functional requirements, it is also important to consider nonfunctional requirements, such as performance and security, early in the process; they can determine the technology choices and areas of risk. Nonfunctional requirements do not endow the system with any specific functions, but rather constrain or further define how the system will perform any given function. Functional requirements should be specified along with their associated nonfunctional requirements.

Chapter 3 offers a checklist that can be used by testers during the requirements phase to verify the quality of the requirements. Using this checklist is a first step toward trapping requirements-related defects as early as possible, so they don't propagate to subsequent phases, where they would be more difficult and expensive to find and correct.
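Replacing one vague requirement with several measurable ones can also be sketched in a few lines. The thresholds below are invented for illustration; on a real project they would come from the stakeholders who own the requirement:

```python
# "The system must provide good value" restated as three unambiguous
# requirements, each with a binary quality measure (fits / does not fit).
measures = {
    "95% of order entries complete in under 3 seconds":
        lambda m: m["order_entry_p95_s"] < 3.0,
    "Month-end close batch finishes within 6 hours":
        lambda m: m["close_batch_h"] <= 6.0,
    "No more than 2 open severity-1 defects at go-live":
        lambda m: m["open_sev1"] <= 2,
}

observed = {"order_entry_p95_s": 2.4, "close_batch_h": 7.5, "open_sev1": 1}
for requirement, fits in measures.items():
    print(f"{('FITS' if fits(observed) else 'DOES NOT FIT'):13} {requirement}")
```

Each predicate divides all candidate solutions into exactly the two classes Alexander describes, which is what makes the restated requirements testable.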


TYPES OF SAP TESTS

Traditionally, the main types of SAP tests include unit, development, scenario, integration, performance, user acceptance, and regression testing. These tests are further described below to provide greater granularity into what each type of test entails.

Unit Testing

This is the lowest level of testing, at the SAP transaction level. Unit testing includes boundary testing for positive and negative testing. Negative testing should be performed for custom fields and transactions to ensure that the system only allows valid input and can adequately perform exception handling. An example of a negative test for a process would be attempting to process an order with the wrong status. Unit testing also includes testing security roles. The configuration team owns the unit-testing effort and is responsible for planning and execution of unit testing. The main focuses for unit testing are:

■ Master data
■ Negative-positive testing
■ Transaction functionality
■ Security roles and profiles

Negative testing is performed on security roles and profiles, custom fields, objects, and processes. Each negative test needs two elements (a minimal automated sketch appears after the lists below):

1. Intentionally specify conditions that will cause the software to generate an error.
2. Ensure that the generated error is handled in a specified manner.

An example of a negative test condition would be "Attempting to post a material to an invalid profit center should produce an error message." Another negative testing example, for security roles and segregation of duties, would be "An inventory clerk attempts to approve a million-dollar purchase order when he is only permitted to approve purchase orders for a maximum of $500,000." Negative testing will be designed to address the following situations:

■ Check exception handling and error messages.
■ Prove that the system will deal with program exceptions and erroneous data.
■ Limit or prevent an end user from trying to do something he should not.
■ Demonstrate that the system does not do anything that it is not supposed to do.
■ Users are permitted to perform only actions based on their authorizations, position roles, and permissions.
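The two elements of a negative test translate directly into an automated check. The sketch below is hypothetical and uses pytest; the profit center values and the error message are invented, and the post_material function is a local stand-in for the real SAP validation:

```python
import pytest

VALID_PROFIT_CENTERS = {"PC100", "PC200"}  # invented master data

def post_material(profit_center):
    """Stand-in for the posting transaction; the real check lives in SAP."""
    if profit_center not in VALID_PROFIT_CENTERS:
        raise ValueError(f"Profit center {profit_center} does not exist")
    return "posted"

def test_posting_to_invalid_profit_center_is_rejected():
    # Element 1: intentionally supply a condition that must produce an error.
    # Element 2: assert that the error is raised and carries the specified
    # message, i.e., that it is handled in the specified manner.
    with pytest.raises(ValueError, match="does not exist"):
        post_material("PC999")
```

A negative test that merely observes "something went wrong" satisfies the first element but not the second; the assertion on the specific error is what makes the exception handling itself the object under test.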

Development Testing

This is the testing for reports, interfaces, conversions, enhancements, workflows, and forms (RICEWF), development objects built primarily with ABAP code. Testing of development objects includes testing for security authorizations, performance, extracts, data transfer rules, reconciliations, and batch scheduling jobs. In many SAP projects, third-party tools such as Control-M and AutoSys are acquired to schedule reports and interfaces with dependencies, and these scheduled jobs need to be tested prior to releasing the system into the production environment.

Development testing should also ensure that data can be tested through the intended target system. The owner(s) of the target system can specify the applicable or representative sets of data needed to test interfaces and conversions, which allows the development or ABAP team to conduct white box and black box testing on ABAP programs. The development or ABAP team is responsible for planning and executing the development tests, but the configuration team is responsible for approving the results of the development tests. Development testing ensures that interfaced data originating from legacy systems can be effectively transferred into SAP or sent from SAP into a legacy system. In order to design test cases for RICEWF objects, technical specifications that can contain pseudocode will need to be developed. The development test cases need to reflect the testable conditions from the technical specifications.

Business Warehouse (BW) testing is also part of the development tests. BW testing includes testing the InfoCubes, queries, reports, and MultiCubes. The main types of tests for BW testing are (a small sketch of the first two follows this list):

■ Reconciliations. Are financial calculations rolling up correctly?
■ Extracts. Is there a match between the number of extracted records and the number of received records?
■ Performance. How fast can a query be performed, and does it conform to established performance SLAs?
■ Security. Who is permitted to slice and dice the data in the BEx Analyzer? What are the established roles for generating queries?
■ Data transfer rules. Is data transformed correctly for all fields from the source system to the target system?
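The reconciliation and extract checks are mechanical enough to sketch. The Python fragment below is hypothetical; the row layout (plant, amount) and values are invented, and on a real project the two row sets would be pulled from the source system and the BW target:

```python
from collections import defaultdict

# Invented extract: (plant, amount) rows from the source and as loaded in BW.
source_rows = [("US01", 100.00), ("US01", 250.50), ("DE01", 75.25)]
loaded_rows = [("US01", 100.00), ("US01", 250.50), ("DE01", 75.25)]

# Extract check: extracted record count must equal received record count.
assert len(source_rows) == len(loaded_rows), "record counts differ"

def rollup(rows):
    """Total the amounts per plant, mimicking a financial roll-up."""
    totals = defaultdict(float)
    for plant, amount in rows:
        totals[plant] += amount
    return dict(totals)

# Reconciliation check: per-plant totals must match after the load.
assert rollup(source_rows) == rollup(loaded_rows), "roll-up mismatch"
print("extract and reconciliation checks passed")
```

The performance and security items in the list do not reduce to a count comparison; they are verified against SLAs and role definitions, respectively.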

Scenario Testing

The equivalent of a string test, scenario testing is the testing of chains of SAP transactions that make up a process within a single area or module. Scenario testing includes testing of a process with data from external systems and applicable SAP roles/profiles. Scenario testing is primarily a manual effort but can include some partial automation with test tools for processes that are stable, frozen, and proven to have worked manually. Scenario testing is owned by the configuration teams but includes participation from SMEs and members of the test team and development team.

Integration Testing

Integration testing is the testing of chains of SAP transactions that make up an end-to-end process that cuts across multiple modules, for instance, order-to-cash, purchase-to-pay, and hire-to-retire, with external data and converted data. Integration testing includes testing through the external systems and SAP bolt-ons with security roles and workflow. Integration testing consists of multiple iterations. The dedicated test team is the owner of the integration test. Integration testing requires participation from members of the configuration and development teams for defect resolution. Additionally, SMEs and end users participate in the integration test as reviewers and for approval of test results. Integration testing is mostly a manual effort but can include some partial automation with test tools.

Performance Testing

Performance testing encompasses load, volume, and stress testing to determine system bottlenecks and degradation points. A performance test helps to determine the optimal system settings to meet and fulfill the established SLAs. The dedicated test team is the owner of the performance test. Performance tests are conducted primarily with automated test tools. In theory it is possible to conduct performance testing with manual test cases, but this proves highly impractical, since it is not easily repeatable and requires both human and hardware resources that are often not available. A performance test, even if automated, can still include manual execution of interfaces, batch jobs, and external processes that send data into SAP. The Basis, database, and infrastructure teams help monitor the performance test, whereas the configuration team helps to identify test data and document test cases that are suitable for the performance test.
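The SLA verification at the heart of a performance test can be reduced to a small sketch. The fragment below is hypothetical Python, not a substitute for a load-testing tool: it times repeated executions of a single stand-in action and checks the 95th percentile against an invented two-second SLA.

```python
import time
import statistics

def measure_p95(action, runs=50):
    """Time repeated executions of an action and return the 95th percentile."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        action()          # replace with the real transaction call
        samples.append(time.perf_counter() - start)
    return statistics.quantiles(samples, n=20)[18]  # last of 19 cut points

SLA_SECONDS = 2.0  # invented service-level agreement
p95 = measure_p95(lambda: time.sleep(0.01))  # stand-in workload
verdict = "meets" if p95 <= SLA_SECONDS else "violates"
print(f"p95 response time = {p95:.3f}s -> {verdict} the {SLA_SECONDS}s SLA")
```

A real load test adds what this sketch deliberately omits: concurrent virtual users, ramp-up profiles, and monitoring of the servers under load, which is why dedicated tools dominate this test type.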

User Acceptance Testing

User acceptance testing allows the system's end users to independently execute test cases from the perspective of how the end users plan to perform tasks in the production environment. The owners of the user acceptance testing are the end users, and the configuration and test team members resolve defects identified during the user acceptance test. The test team and change management team members help train end users and prepare them for the user acceptance test.


Regression Testing

Regression testing ensures that previously working system functionality is not adversely affected by the introduction of new system changes. System changes targeted for the production environment need to be analyzed for impact and cascading effects on other processes. Since SAP R/3 is an integrated system, a single system change—whether it is a hotpack, an OSS note, or a transport to resolve a defect—can have far-reaching consequences for other processes, and thus regression testing is needed to ensure that "nothing is broken" as a result of a new system change. Regression testing is primarily an automated testing effort. For regression testing, a library of automated test cases is constructed and played back to ensure that system transports do not break or alter system functionality; a minimal sketch of this playback appears below. The test team owns the execution of the regression test. Determining the impact of a system change is primarily the responsibility of the integration team and change control board (CCB).

Other types of SAP tests include usability, archiving, data migration, and technical tests. Usability testing is discussed in Appendix A. Technical tests, such as backup and recovery, printing, faxing, electronic data interchange (EDI), availability, and so on, are also needed, in particular for initial SAP implementations and/or global SAP rollouts. The concept of technical testing is beyond the scope of this book. Data migration testing for established SAP implementations refers to SAP projects that have global SAP rollouts or multiple business units and want to introduce SAP to other company divisions or business segments. For example, a company may have designed the order-to-cash business process within SAP for one division and may have plans to extend the same or a slightly modified version of the order-to-cash business process to a different division that has different data values, and thus the new data values need to be tested.

Depending on contractual obligations, the scope of the project, project oversight, or industry regulations, the SAP tests described above may need to be either very formal and structured or casual. The tests described will at a minimum require identification of valid test data, rewriting of test cases or creation of new test cases, manual testing, peer reviews, and approvals at the end of each testing cycle.
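The regression library's replay-and-compare loop is simple to sketch. The Python fragment below is hypothetical: the case IDs, baseline results, and post-transport results are all invented, and a real implementation would drive the test tool rather than read from a dictionary.

```python
# Approved baseline: expected result for every case in the regression library.
baseline = {"TC-101": "RECEIVED", "TC-201": "APPROVED", "TC-301": "ARCHIVED"}

def run_regression(execute):
    """Replay every library case and report results that drift from baseline."""
    broken = {}
    for case_id, expected in baseline.items():
        actual = execute(case_id)
        if actual != expected:
            broken[case_id] = (actual, expected)
    return broken

# Pretend the latest transport changed the behavior behind TC-301.
after_transport = {"TC-101": "RECEIVED", "TC-201": "APPROVED", "TC-301": "OPEN"}

for case_id, (actual, expected) in run_regression(after_transport.get).items():
    print(f"{case_id}: regression - expected {expected}, got {actual}")
# TC-301: regression - expected ARCHIVED, got OPEN
```

The value of the library compounds with every transport: the comparison is unattended, repeatable, and produces exactly the audit trail of results that the earlier sections call for.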


CHAPTER 2

Status Quo Review of Existing Testing Practices

In order to establish a successful SAP testing program it is important to determine what strengths and weaknesses the existing testing approach offers. The status quo for many SAP projects is outdated testing documentation, underutilized automated test tools, overlooked lessons learned, and testing resources who are fillers or stand-ins until a formal and dedicated test team is established. Reviewing the status quo requires elimination of flawed testing practices and the furthering of successful practices. Naturally, this is easier to discern when lessons learned are captured. However, even in the absence of lessons learned, it is possible to dissect and detect the program's testing strengths and weaknesses by reviewing the project's methodologies, reviewing the reported defects, holding testing seminars, or hiring third-party organizations that specialize in software testing.

HOW ARE YOU TESTING? Project managers at most SAP installations have the perception that their projects follow structured testing approaches, that their project members “know how to test,” and that they successfully completed testing for previous system releases. While this notion is prevalent at many SAP installations, most SAP installations cannot accurately answer the following 10 questions: 1. What is the number of fulfilled testable requirements for a previous testing cycle, or how does the system design trace back to all captured functional, development, and technical requirements? 19

02_4782

2/5/07

20

10:32 AM

Page 20

TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE

2. What is the number of test cases that need to be planned for the next testing cycle? 3. What is the expected cost associated with all testing activities? 4. What is the number of resources needed to test the system either for a major system upgrade or for an initial system installation? 5. What business processes are suitable for test automation based on predefined criteria? How are automated test cases reused from one testing cycle to the next? 6. How is the system configuration compliant with company policies, corporate business rules, and/or industry regulations? 7. How many defects remain outstanding from a system release, what is their priority, or how they will be resolved in a future system release? 8. What are the assessment criteria for evaluating proposed system changes and what objects are impacted that require regression testing as a result of the introduction of a system change? 9. What are the components of a test exit criterion or how is a test readiness review (TRR) conducted? 10. What testing metrics are captured for each system release and/or what testing documentation is retained at the end of each testing cycle to support company audits? This is only a partial list of the questions that most SAP installations struggle to answer when planning and executing their testing tasks. The status quo at most SAP installations is to interpret or define the intention of captured requirements, configure the system, develop advanced business application programming (ABAP) objects without code walk-through, rush or compress the testing schedule with project resources who are fillers or devoid of a testing background, transport objects into the production environment to meet project deadlines, and confront system defects through the production help desk. This approach of compressing the testing schedule and rushing transports into the production environment increases project costs since resolving and eliminating system defects after they have been introduced in the production environment is much more cost expensive than doing so in the earlier stages of the software life cycle. But the practice at many SAP installations is to place the burden on the SAP production support for resolving defects that were

02_4782

2/5/07

10:32 AM

Page 21

Status Quo Review Existing Testing Practices

21

missed, overlooked, or waived from a previous system release in the hopes of dealing with system defects at a later date. In a current era of strict compliance with government acts and industry regulations, many companies implementing or maintaining SAP installations must show how their SAP projects tie in with the project charter, scope statement, captured requirements, and the retention of testing artifacts such as test logs and test results, which requires a robust and comprehensive testing approach. In order to maximize the effectiveness of the project’s testing resources, it is necessary to review and analyze existing testing practices in order to address a wide range of situations that may impact testing cycles such as: ■



■ Does the project have centralized or decentralized test teams?
■ Is the test team, or what passes for a test team, composed of "fillers" or individuals from other project teams who are moonlighting as testers?
■ How are process flow diagrams constructed to meet and enhance understanding of testable requirements? How are links and interdependencies among process flow diagrams captured, maintained, and depicted? Do process flow diagrams include swimlanes and expected SAP production roles for each swimlane?
■ Can the project manually execute all planned test cases and test scenarios?
■ Do test cases contain sufficient information and test conditions to verify and validate SAP profiles, segregation of duties, workflow, inbound data from legacy systems, reports, and company policies?
■ Does the project have documented, peer-reviewed, and approved test plans and test strategies? If so, how do project members adhere to such documentation?
■ How are referenced documents for configuring the system, such as business process procedures (BPPs), flow process diagrams, functional and technical specifications, and requirements, managed and updated so that the system configuration settings and ABAP objects stay in sync and in harmony with the project's documentation?
■ How are lessons learned from a previous testing cycle applied to future testing efforts? Or are lessons learned gathered and collected only to gather dust and never be applied?
■ Do project resources show resistance to change when new testing initiatives or approaches are introduced?

The inability to address these situations can substantially compromise and undermine all testing tasks and deliverables. For SAP projects that can successfully address all of the preceding situations, the next step forward is continuous process improvement: the never-ending quest to streamline the testing effort without compromising system quality or increasing project costs. Practical experience shows, however, that few companies can address all of the aforementioned conditions; most excel at meeting some of them while completely ignoring the rest. SAP installations that recognize the gaps and disconnects in their testing methodology (even when their SAP system has been in production support for a number of years) can reduce project costs, increase the chances of verifying testable requirements, and minimize the defects that end users log against the production environment. A lightweight first step toward answering questions such as 1, 2, and 7 above is simply to keep requirements, test cases, and defects in a structured, queryable form, as in the sketch that follows.
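As a purely illustrative sketch (the record layout and sample data are hypothetical, not a tool or schema prescribed by this book), the following Python fragment shows how even a simple structured log of requirements, test cases, and defects makes questions 1, 2, and 7 directly answerable:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    description: str
    test_case_ids: list = field(default_factory=list)  # traceability links

@dataclass
class Defect:
    defect_id: str
    priority: str   # e.g., "high", "medium", "low"
    status: str     # e.g., "open", "closed", "deferred"

# Hypothetical sample data for one testing cycle
requirements = [
    Requirement("REQ-001", "Create sales order (VA01)", ["TC-01", "TC-02"]),
    Requirement("REQ-002", "Post goods issue", ["TC-03"]),
    Requirement("REQ-003", "Generate billing document", []),  # not yet covered
]
defects = [
    Defect("DEF-101", "high", "open"),
    Defect("DEF-102", "low", "closed"),
    Defect("DEF-103", "medium", "deferred"),
]

# Question 1: how many requirements trace to at least one test case?
covered = [r for r in requirements if r.test_case_ids]
print(f"Requirements covered: {len(covered)}/{len(requirements)}")
for r in requirements:
    if not r.test_case_ids:
        print(f"  UNCOVERED: {r.req_id} - {r.description}")

# Question 2: how many test cases are planned for the cycle?
planned = {tc for r in requirements for tc in r.test_case_ids}
print(f"Planned test cases: {len(planned)}")

# Question 7: outstanding defects by priority
open_defects = [d for d in defects if d.status != "closed"]
by_priority = {}
for d in open_defects:
    by_priority[d.priority] = by_priority.get(d.priority, 0) + 1
print(f"Outstanding defects by priority: {by_priority}")
```

A commercial test management tool provides the same queries with auditing and access control; the point is only that the questions become answerable once the data is captured in structured form.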

TEST TEAM STRUCTURE

Reviewing the status quo for SAP testing requires the project manager and/or test manager (if one exists) to determine the makeup and composition of the testing team. In SAP projects the testing team is centralized, decentralized, or "outsourced." After the composition of the test team is determined, the next steps include deciding whether the current team makeup is suitable for the project's testing needs or what it would take to shift the team structure.

Centralized test teams are dedicated test teams whose main or only responsibilities are to maintain automated test tools, execute manual and automated test cases, report defects, design test cases, and enforce testing standards. Centralized test teams are under the control of and report directly to the appointed test manager, and they interface heavily with members of the configuration and development teams to document and design test cases, execute test cases, and resolve defects. They tend to bring consistency to the creation of testing deliverables, since testers have a single reporting hierarchy (the test manager) and adhere to the testing standards the test manager implements. Centralized test teams typically conduct hands-on regression, performance, and integration testing. They are most suitable for projects that meet the following criteria:



■ There is a large functional scope.
■ Data is migrated from multiple legacy systems and/or SAP is heavily customized.
■ They are subject to audits and industry standards.
■ They have automated test tools.
■ They can allocate the necessary budget for the test team.
■ They have constructed a requirements traceability matrix (RTM).
■ They have experienced a high volume of complaints and/or requests for changes about the system functionality and performance from the production end users.
■ They are paying a system integrator to deliver SAP services and require some "independent" testing to verify and validate the deliverables from the system integrator.

Decentralized test teams are by far the most common structure at SAP implementations; members of the functional and development teams act as filler testers as the project testing schedule requires. For example, under a decentralized test team, a functional SAP configuration team member or ABAP team member becomes a "tester" when he develops and executes test cases to test a functional process. Admittedly, initial testing efforts such as unit testing are usually conducted by members of the configuration and ABAP teams, but these individuals are not dedicated testers and have a proclivity to produce deliverables of varying quality. With decentralized test teams, different standards, methodologies, or templates may be used for testing SAP, which gives rise to multiple "test leads" based on the functional SAP area being tested. Accountability and ownership tend to fall through the cracks, since no single individual is responsible for test results, maintaining automated test tools, ensuring that requirements have traceability to test cases, reporting testing metrics to measure testing progress, or documenting test plans and lessons learned. Decentralized test teams are suitable for projects with limited budgets that do not have automated test tools, when the SAP implementation is plain vanilla and implemented primarily out of the box, or when the project has a limited functional scope. Decentralized test teams yield results that vary widely across the functional teams and hinder the ability of the project manager to evaluate the exit criteria for each testing effort. These test teams create confusion over testing deliverables, roles, and responsibilities for testing the system, and produce results that are difficult to leverage during future system releases.

Outsourced test teams are the latest industry trend for lowering testing costs. Under an outsourcing agreement, a project hands off testing activities to a third-party company that specializes in testing, typically located in a country with markedly lower labor costs. For example, an SAP installation may turn over all test automation tasks for developing, constructing, and executing automated test cases to a third-party company in the hopes of lowering the costs of performing the same automation tasks in-house. Outsourcing agreements also require, at a minimum, a project liaison to monitor, guide, and verify the deliverables that the outsourcing third-party company produces. An outsourced test team is a variant of a centralized test team, since in theory the outsourced testers were hired for their testing expertise and their ability to follow the same approach, templates, methodology, and practices for producing deliverables, which adds consistency; they also report to a single manager. Exhibit 2.1 highlights the benefits and drawbacks of the different test team structures.

EXHIBIT 2.1 Benefits and Drawbacks of Different Test Team Compositions

Centralized
Concept: QA and testing teams are established for defining and enforcing the standards and procedures for the various testing cycles and for documenting the test plan, test results, etc. QA is geared toward prevention; testing is geared toward detection.
Benefits: Uniformity and consistency (i.e., same templates); increased testing expertise; test tool knowledge and industry certifications (i.e., CSTE, Mercury, etc.); can build a center of excellence; independent point of view; facilitates the go/no-go decision; increased test coverage.
Drawbacks: More resources, higher payroll; cultural shift; can slow the process to enforce QA.

Decentralized
Concept: Each team (RICE, Functional, Security) conducts its own testing without coordination and standards.
Benefits: Perceived greater control over testing artifacts; perception that testing goes "faster"; greater flexibility to test without interference from a third party (group).
Drawbacks: Redundancy; inconsistency; testing rushed to meet deadlines; potential conflict of interest; limited managerial visibility; lack of standards; limited test coverage; missing testing metrics and results.

In addition to determining the makeup of the test team, it is also important to examine the makeup of the functional and development teams, because the test team needs to be a reflection or mirror image of those teams. For instance, if the SAP project has a "hire-to-retire" functional team, then the test team needs a tester assigned to the test cases for the "hire-to-retire" team, which include negative, security, workflow, migrated-data, and user-exit testable conditions. SAP implementations structure their functional teams either to emulate an end-to-end production business process (i.e., hire-to-retire, order-to-cash, purchase-to-pay, etc.) or to emulate standalone SAP modules (i.e., Human Resources module, Sales & Distribution module, Materials Management module, etc.). Setting up functional teams according to end-to-end processes is more effective than a team structure that emulates standalone SAP modules, since it takes into account the SAP integration points ("touch points") as well as the variations (whether data or process driven) for each end-to-end process.

The review of the existing test team structure is the first step in deciding how future testing cycles will be conducted. In order to change the test team structure, the project manager will need to review existing project deadlines, budget constraints, the project's testing methodology and test plans, and the project's charter. However, before making any decisions about revamping the structure of the test team, it is necessary to review any documented lessons learned from previous testing cycles and how those lessons learned have been applied.

REVIEW LESSONS LEARNED

One of the most critical components of a testing program is documenting lessons learned and implementing corrective actions to address any deficiencies the documentation identifies. Recognized industry methodologies and models such as the Capability Maturity Model (CMM) and the Project Management Institute (PMI) place a premium on leveraging lessons learned. Lessons learned, however, are rarely documented, because most projects do not have the bandwidth or discipline to document "what went wrong." This is a false economy, as most companies have a proclivity to repeat the same mistakes in future testing cycles. Lessons learned are needed for the following shortcomings that plague many SAP testing programs:

■ Inability or difficulty in collecting and reporting testing metrics.
■ Making little or no use of automated test tools and test management tools even when a significant financial investment has been made in these tools.
■ Not allowing sufficient time to conduct a performance test or hold trial runs for a performance test.
■ Having a shared test environment that is subject to frequent changes and transports that can adversely affect the execution of manual test cases or the automation of test cases.
■ Obsolete and outdated documentation for BPPs, flow process diagrams, and functional specifications.
■ Missing peer reviews and signoffs for test cases.

■ Missing test execution calendar.
■ Not managing testable requirements or creating an RTM.
■ Not documenting workarounds for defects that do not have a resolution prior to cut-over or go-live.
■ Not having predefined exit criteria for each testing cycle.
■ Not implementing or testing a solution that traces to the intent of the original scope statement.
■ Poorly tested SAP roles/profiles, or documented test cases that do not consider the SAP roles needed for the SAP transactions to be conducted.
■ Ignoring negative testing conditions.

After every testing cycle the test manager is expected to document all lessons learned for continuous process improvement. Lessons from testing typically describe which areas of testing require improvement or were not performed in accordance with approved standards. Lessons learned are needed because SAP projects usually compress the testing schedule, which causes testers to cut corners to meet deadlines, or because the test manager encountered situations during testing that the test plan did not anticipate, forcing hasty decisions during the testing cycle that did not yield the expected results.

In the event that the SAP project does not have a dedicated test manager, an organization implementing SAP can hire a third-party organization to document "what went wrong" after the fact. These third-party organizations analyze test results and apply software life-cycle techniques to discern why testing was not conducted as expected or why a large number of defects slipped into the production environment. A third-party organization is typically brought in to document testing lessons learned when the original SAP system integrator is dismissed in favor of a new system integrator, or when the SAP production support team is expected to resolve all defects that were missed during previous testing cycles, which causes the production team's workload to surge in the first six months following a major SAP system upgrade or an initial SAP go-live.

Documenting lessons learned can help programs meet future audits of test results, reduce rework costs, increase the stability of the production environment, and provide a template for planning the next test cycle. Captured lessons, whether documented by the test manager or a third-party company, must be reviewed at the end of the testing cycle and may drive modifications to test plans, test strategies, the automation approach, the design and construction of future test cases, and the approach for resolving and closing defects. For large or global organizations implementing SAP at multiple sites, potentially with different system integrators, it is important to communicate the lessons learned company-wide and to post them in a common repository such as a company-sponsored intranet. A company's methodology, whether created in-house or adopted from another body, needs to be aligned with the need to capture lessons learned.

EXISTING METHODOLOGY

When reviewing the status quo, companies implementing SAP need to assess what software methodology or approach guides the work products and deliverables of the SAP resources, including the SAP testing team. Large SAP system integrators such as Deloitte Consulting and IBM offer methodologies and implementation guides, such as ThreadManager and Ascendant™, for either upgrading or initially installing SAP. SAP itself offers the SAP Roadmap methodology embedded within the Solution Manager platform. Recognized bodies such as IEEE, SEI, and the U.S. Department of Defense (DoD), with its 5000 series directives for life-cycle management and acquisition, to name a few, also provide software methodologies applicable to implementing an ERP/Customer Relationship Management (CRM) solution such as SAP R/3.

Corporations that lack a recognized methodology for implementing SAP can rely on software approaches that conform to the waterfall, spiral, and evolutionary models. These models offer different approaches for implementing software, including prototyping, dealing with programs that have a large scope, or coping with unstable requirements. Depending on the size of the corporation implementing SAP, it is possible that the corporation already has other large software initiatives, and a successful life cycle for delivering them, that can be leveraged for implementing SAP.


A successful software methodology, whether created in-house or adopted from another body, needs to have templates, accelerators, and white papers for testing ERP applications. Methodologies specifically designed for building software from scratch may not be suitable for implementing an out-of-the-box solution such as SAP and thus may not offer any relevant guidance for testing SAP. Methodologies can differ in style, nomenclature, or deliverables, but for testing purposes, guidelines need to be clearly identified for the testing tasks that need to be performed. For instance, in the DoD, the 5000 series acquisition directives govern by law what the government requires for a user acceptance test (UAT), which can differ drastically from the recommendations and guidelines of other SAP methodologies for conducting UATs. Methodologies can differ in what is necessary to conduct a UAT; what matters is that each methodology addresses the need for UAT testing and provides some guidance for conducting it. The project manager and test manager must pay special attention to the project's methodologies and to how existing testing activities and tasks conform and align to them. If no formal methodology exists within the project, then efforts must be taken to ensure that the testing approach and test plans are adequate to help the project fulfill testing exit criteria, comply with testing audits, document lessons learned, and ensure that the system's design successfully traces to the in-scope requirements.

MANAGING TESTING CHANGES

An appointed or chartered committee such as a change control board (CCB) or project management office (PMO) can formally convene the project stakeholders who are authorized to introduce project or system changes affecting project resources, schedule, quality, or scope. But finding a similar committee for introducing testing changes at SAP projects is rare. SAP and other ERP projects need a testing committee, consisting of members such as solution architects, the integration manager, the test manager, the configuration and development managers, release managers, and the project manager, who are capable of and authorized to introduce changes to the test plan, test strategy, or test approach. The test manager can chair the testing committee, and members can meet at a specified frequency (i.e., monthly) to review testing changes for acceptance, rejection, introduction, implementation, and enforcement across all project teams. The concept of a testing steering committee or chartered testing group is often ignored at many SAP projects, which causes the testing life cycle to remain static, inefficient, or useless.

One of the most pressing and frequent challenges that a testing program at an SAP project experiences is how to introduce change to an organization's testing artifacts, deliverables, and standards. Most SAP projects lack an appointed or chartered committee for introducing, implementing, and enforcing new changes to either the testing approach or the testing standards. A test manager who does not have authority over the functional (configuration) and development (technical) resources who are appointed to testing tasks may encounter resistance from team leaders or project managers whose direct reports are impacted by the introduction of testing changes. A test manager may propose an ostensibly "minor" change to the testing life cycle or testing methodology only to find that the configuration team leaders or configuration manager rejects or resists the proposed change. It is often the case that the test manager cannot align the testing objectives and consequences with the individuals in charge of managing the resources appointed to testing tasks. Under these circumstances, the test manager would need to convince a project manager or several team leaders (who may not be authorized to make decisions), in a series of ongoing meetings, that a testing change is necessary, of the rationale for the change, and of its benefits. An approach whereby a test manager must convince several project entities and stakeholders that a testing change is needed is typically time consuming and likely to delay the implementation of the change, or to cause the change to be scrapped because the project members cannot reach an agreement.

During the implementation or support of an SAP system, the test manager will need to propose changes to, at a minimum, the following artifacts and work products: the testing methodology, test plans, automation strategy, testing templates, testing presentations, the approach for defect resolution, and test execution. Introducing changes to a test artifact, however, has the potential to affect the project members responsible for or assigned to testing tasks, the project scope, the training of resources, the project schedule, and project costs. Examples of testing changes that a test manager may propose and their consequences to the project include the following:









■ Customization changes to a test management tool (i.e., adding new fields, screens, or validation logic), which can trigger the creation of new custom training materials for the changes and the training of project resources, who may have different learning curves for becoming familiar with the changed tool.
■ Applying lessons learned from a previous testing cycle to a future testing cycle, which can over the long run increase quality or improve the testing methodology, but may cause a cultural shift among testing resources who resist the adoption of a new test approach derived from lessons learned.
■ Conducting requirements-based testing, as opposed to executing test cases that do not map to a requirement, which can cause the configuration team members or test team members to allocate time from their schedules toward constructing an RTM and updating all test cases to map to a valid requirement.
■ Peer-reviewing test cases. Peer reviews are recognized in the CMM; they help to refine the test cases and ensure that the documented test conditions are valid. However, introducing peer reviews requires, at a minimum, a form to collect feedback from the peer-review session and the assignment of project resources. The consequences of peer reviews can include conflict among testing resources, rework, and a need for training, since a peer review causes one test team member to critique and evaluate the work of another.
■ Introducing a test readiness review (TRR) checklist. A TRR brings a disciplined approach to assessing preparedness for testing prior to the start of a testing cycle. A TRR is a checklist of items that must be met, or have workarounds, before the test execution phase commences. Items not met for a TRR may indicate that the project is not ready to begin execution for the testing cycle or that project members have not fulfilled all their responsibilities associated with a testing cycle. Under a TRR, different individuals are assigned items from a checklist that they must address in meetings prior to the start of testing. A TRR requires different project resources to provide statuses for their assigned tasks before a large audience, which can expose project members to political considerations and greater transparency for their assigned tasks. (A minimal checklist sketch follows this list.)
■ Enhancing flow process diagrams with narratives. It is often the case that a flow process diagram representing either an entire end-to-end scenario or a portion of one is outdated or does not include a narrative. A narrative describes the actors, preconditions, postconditions, description of the process, and assumptions associated with a flow process diagram. In UML (Unified Modeling Language) notation, use cases are constructed with narratives that describe the modeled process. Narratives for diagrams are particularly useful for projects that experience high levels of turnover or have complex (highly customized) business processes. Enhancing a flow process diagram with narratives requires the author of the diagram to describe different attributes of the modeled diagram, which can cause the author to shift attention from other tasks in order to document the narratives.
■ Automating testing processes. Automation is a useful technique for providing greater testing coverage, in particular for regression testing to support new project releases, system transports, or system upgrades. However, initial automation efforts are time consuming, require robust documentation of the test cases and conditions to be validated, and most likely require functional support from subject matter experts and SAP configuration members. Projects are often reluctant to release SAP configuration members for extended periods of time, or any period of time, to construct automated test cases or to update documentation such as test cases or business process procedures.
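For the TRR item above, here is a minimal sketch of how readiness might be recorded and evaluated in Python; the checklist items, owners, and pass rule are illustrative assumptions, not a prescribed TRR format:

```python
from dataclasses import dataclass

@dataclass
class TRRItem:
    description: str
    owner: str            # project member accountable for the item
    met: bool             # satisfied as stated?
    workaround: str = ""  # documented workaround if the item is not met

# Hypothetical TRR checklist for an integration testing cycle
checklist = [
    TRRItem("Test cases peer reviewed and signed off", "Test Lead", True),
    TRRItem("Test environment refreshed with production-like data", "Basis Team", False,
            workaround="Use prior-month data copy; risk accepted by test manager"),
    TRRItem("SAP roles/profiles assigned to all testers", "Security Team", False),
    TRRItem("Defect tracking tool configured for the cycle", "Test Manager", True),
]

def trr_passed(items):
    """Execution may start only if every item is met or has a documented workaround."""
    blockers = [i for i in items if not i.met and not i.workaround]
    for item in blockers:
        print(f"BLOCKER: {item.description} (owner: {item.owner})")
    return not blockers

if trr_passed(checklist):
    print("TRR passed: test execution phase may commence.")
else:
    print("TRR failed: resolve blockers before starting test execution.")
```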

The aforementioned examples are situations that have training, political, rework, and labor-hour implications for the project members assigned to testing tasks. Adopting these or similar testing changes within an SAP project can end in futility, as decisions on testing changes move at a slow pace, if at all. Project managers must formally establish a test engineering committee, with participation from project members empowered and authorized to accept or reject testing changes, to avoid the inertia that otherwise meets the introduction of testing changes.


CHAPTER 3

Requirements

Total quality management (TQM) proponent Philip Crosby defined quality as "conformance to requirements." At the time it was coined, Crosby's definition applied primarily to statistical process control and manufacturing processes, but it is also applicable to software projects, whereby software requirements are mapped to test cases and ultimately to the designed software solution to be deployed into the production environment. Our definition of quality is "a system that performs as expected, making available the required system features in a consistent fashion and demonstrating high availability under any type of constraint (i.e., stress, concurrency, security breach, etc.); a system that consistently meets the user's expectations and satisfies the system users' needs can be considered to be of high quality."

Whose responsibility is it to build high-quality systems? All stakeholders who take part in the software development process are responsible for system quality and need to have tasks assigned accordingly so that they can contribute to the quality of the system. How is this type of quality achieved? Simply put, by documenting the system's user requirements and needs. In practice, however, achieving this type of quality is more complicated than that. For example, many SAP projects have neither a stringent and effective methodology for drafting, capturing, managing, and verifying requirements, nor the schedules and budgets to apply one effectively. It can also be counterproductive to document every single requirement detail, something that is just not feasible unless you are working in an environment with no deadlines.

Even simple concepts such as the basic requirements traceability matrix (RTM)¹ and requirements-based testing are often obscure or esoteric at most SAP implementations. So can be efforts to prioritize requirements. When working on a project where all requirements are considered high priority, it is not possible to implement or test based on requirement risk. The inability to successfully manage and prioritize requirements based on risk, or to link requirements to SAP components or test cases, due to unrealistic deadlines and other internal project pressures and political ramifications, can lead to the deployment of an SAP system that severely lacks quality (i.e., it does not meet the client's needs or goals, or it violates the company's rules). Companies that do not verify all requirements cannot answer the simple yet critical question: "Have we built the system correctly?" Industry surveys likewise report that many enterprise resource planning (ERP) implementations (e.g., SAP) are judged unsuccessful. Many projects that cannot answer "Have we built the system completely and correctly based on prioritized, traceable, and testable requirements?" engage in the inexact art of hunches and wild guesses to determine whether the delivered SAP solution or functionality can be deployed into a production environment on time and within budget, which is the equivalent of sticking a thumb in the air to determine the wind velocity. While perfect requirements may never be achieved, with the help of requirements management tools, risk analysis, peer reviews, RTMs, and stakeholder involvement, companies implementing and upgrading SAP can design robust solutions capable of meeting end-user (client) expectations.

¹ An RTM links all requirements to the SAP implementation and to test cases to allow for measuring completeness.
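As a small illustration of the risk-based prioritization and traceability described above (the requirement data, field names, and risk scores are hypothetical; a real project would define its own risk criteria and keep the RTM in a requirements management tool), consider the following Python sketch:

```python
from dataclasses import dataclass, field

@dataclass
class RTMEntry:
    req_id: str
    description: str
    risk: int                 # e.g., 1 (low) to 5 (high); criteria are project-defined
    sap_component: str        # SAP object or transaction the requirement traces to
    test_case_ids: list = field(default_factory=list)
    verified: bool = False

rtm = [
    RTMEntry("REQ-010", "Sales order entry", 5, "VA01", ["TC-11"], verified=True),
    RTMEntry("REQ-011", "Custom pricing user exit", 4, "Z_PRICING_EXIT", []),
    RTMEntry("REQ-012", "Monthly sales report", 2, "ZSALESRPT", ["TC-12"]),
]

# Test the highest-risk requirements first, rather than treating everything
# as "high priority."
for entry in sorted(rtm, key=lambda e: e.risk, reverse=True):
    if entry.verified:
        status = "verified"
    elif entry.test_case_ids:
        status = "pending"
    else:
        status = "untested"
    print(f"risk={entry.risk} {entry.req_id} -> {entry.sap_component}: {status}")

# "Have we built the system correctly?" is unanswerable while any entry is untested.
untraced = [e.req_id for e in rtm if not e.test_case_ids]
print("Requirements with no test case:", untraced or "none")
```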

REQUIREMENTS BACKGROUND

According to Wikipedia, the free encyclopedia:

By definition a requirement is a description of what a system should do. A requirement is a singular documented need of what a particular product or service should be or do. In the classical engineering approach, sets of requirements are used as inputs into the design stages of product development.

A requirement specifies what the system will do but not how it will do it.


In SAP projects, requirements are often morphed into terms such as business processes, business scenarios, business models and diagrams, the business process master list (BPML) of transaction codes, functional specifications, and so on. Requirements allow the SAP project team to design, configure, and develop a solution that meets the end users' expectations as well as the entity's business rules and procedures. Requirements can help drive the project's scope, personnel resources, and budget.

In ERP packages such as SAP, functional requirements fall into the category of tasks that the system's intended end users must perform in order to complete their job functions. An SAP end user may perform tasks such as creating invoices, sales orders, deliveries, requisitions, outline agreements, purchase orders, general ledger entries, and performance appraisals in order to complete his job functions. These tasks are the "what" that the SAP system must meet, and the job tasks and functions that make up "what" the system should do for end users must be captured as requirements in some form, along with the associated company rules and policies.

Functional requirements are only a subset of the universe of requirements for an SAP system. In addition to functional requirements, SAP systems will have requirements for the following:

■ System performance, which can lead to service-level agreements (SLAs)
■ Security
■ Development objects (i.e., reports, interfaces, conversions, enhancements, forms, workflow)
■ Usability
■ Industry regulations
■ Other

For instance, as part of the implementation of an SAP system, the security components related to segregation of duties, roles, profiles, and security authorizations may need to be captured and verified as testable requirements.

In SAP, requirements can come from various sources for either an initial or an existing implementation. For an existing implementation, requirements can come from one or more of the following events: (1) a previous system release where some of the requirements were deferred, (2) help desk tickets where end users report problems or new features that the system needs, (3) gap analysis and site surveys whereby end users cannot effectively perform their tasks, (4) adding a new system module, (5) implementing a new industry-specific solution, (6) adding a new company division, and (7) initially misunderstood requirements. All these events could cause the SAP project team to capture requirements to implement new system features, enhancements, modifications, or fixes. An initial SAP implementation, by contrast, will need to capture requirements from workshop participants; derive requirements from "as-is" documentation detailing the functionality of legacy systems; ensure that requirements are compliant with industry regulations (i.e., Food and Drug Administration, Federal Energy Regulatory Commission, Sarbanes-Oxley Act, Department of Transportation, etc.); and draw on the client's feedback about expected system functionality, end-user surveys, the scope statement, and so on. Gathering initial requirements requires participation from several stakeholders and is often facilitated through the use of workshops. Once requirements are captured, they can be moved into a requirements management tool and subjected to inspections. Requirements can be managed with the procedures and policies of the Change Control Board (CCB).

In SAP jargon, requirements are often thought of as business scenarios, process steps, security roles, performance expectations, business rules, and so on. Interpreting the intent of requirements is often a source of confusion and friction during the testing phases. However, these problems can be mitigated with requirements inspections, guidelines, and standards for drafting the requirements. For example, Karl Wiegers provides the following guidelines: "Avoid using intrinsically subjective and ambiguous words when you write requirements. Terms like minimize, maximize, optimize, rapid, user-friendly, easy, simple, often, normal, usual, large, intuitive, robust, state-of-the-art, improved, efficient, and flexible are particularly dangerous. Avoid 'and/or' and 'etc.' like the plague. Requirements that include the word 'support' are not verifiable; define just what the software must do to 'support' something. It's fine to include 'TBD' (to be determined) markers in your SRS to indicate current uncertainties, but make sure you resolve them before proceeding with design and construction."²

² Karl Wiegers, Software Requirements, Microsoft Press, 1999.
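Wiegers's word list lends itself to a trivial automated check. The following sketch (a hypothetical helper, not a tool referenced by this book) flags requirement statements that contain ambiguous terms so they can be reworded before design and construction begin:

```python
import re

# Subjective or ambiguous terms flagged by Wiegers, plus 'support', 'and/or', 'etc.'
AMBIGUOUS = {
    "minimize", "maximize", "optimize", "rapid", "user-friendly", "easy",
    "simple", "often", "normal", "usual", "large", "intuitive", "robust",
    "state-of-the-art", "improved", "efficient", "flexible", "support",
    "and/or", "etc.",
}

def lint_requirement(text):
    """Return the list of ambiguous terms found in a requirement statement."""
    lowered = text.lower()
    return [term for term in AMBIGUOUS
            if re.search(r"(?<![\w-])" + re.escape(term) + r"(?![\w-])", lowered)]

requirements = {
    "REQ-020": "The order entry screen shall be user-friendly and rapid.",
    "REQ-021": "The system shall post a goods issue within 2 seconds for 95% of requests.",
}

for req_id, text in requirements.items():
    hits = lint_requirement(text)
    if hits:
        print(f"{req_id}: reword; ambiguous terms found: {hits}")
    else:
        print(f"{req_id}: OK")
```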


A requirement is not a mere flow process diagram, an SAP transaction code, a picture of an interface, a survey response, a broad generic statement in a scope document, or verbose, unwieldy documentation in a text editor that cannot be verified. A requirement focuses on what the system and its users must do and is validated with test cases. Several types of requirements form a hierarchy: a business requirement evolves into a user requirement and further into a functional requirement. Requirements can help determine or refine a project's scope, which subsequently determines personnel needs and budget. The next section provides some techniques for collecting, inspecting, and managing requirements.

METHODS FOR GATHERING REQUIREMENTS

In an SAP implementation there are various methods for collecting and gathering requirements. Some of the methods and techniques that lead to the capturing of new SAP requirements include workshops during the blueprint phase where customer input (CI) templates are populated, user surveys, the company's business rules, use cases, government regulations, deferred requirements from a previous release, help desk tickets, prototypes, gap analysis, "as-is" documentation from legacy systems, interviews with subject matter experts (SMEs), and the scope statement.

In order to successfully design, configure, and implement an out-of-the-box or prepackaged solution such as an ERP system, it is necessary to capture, manage, and verify requirements. Requirements can be captured for various categories such as functionality, security, performance, usability, development objects, and workflow. Captured requirements need to be evaluated, and requirement changes need to be managed in order to avoid scope creep. Most initial SAP implementations will capture requirements during the workshops held in the blueprint phase, whereas existing SAP implementations may collect requirements from site surveys, gap analysis, end-user feedback, the addition of a module, and so on. Independent of the method for capturing requirements, the objective of the test team and project manager is to ensure that the system conforms to all documented, in-scope requirements.

For companies either implementing or upgrading SAP with the implementation guide Solution Manager or IBM's Ascendant™ guide, a methodology is presented that allows end users, SMEs, and business analysts to voice their system expectations, expected system tasks, and requirements. Solution Manager offers CI templates that contain fields and sections in which requirements can be identified, documented, and organized under three categories: (1) organizational units, (2) master data, and (3) business processes. CI templates can be modified as needed to meet the project's needs. Furthermore, companies can identify guidelines and standards for populating a CI template, storing it, controlling its versions, and applying statuses to it. An example of modifying a CI template could entail the inclusion of fields that capture how frequently a business process is executed, its expected business volume, its priority, and error-handling conditions to compensate for end-user mistakes. Examples of guidelines and standards for populating a CI template include identifying which fields are mandatory, naming the stakeholders who must review the contents of the CI template, ensuring that no fields in the CI template are left blank, setting the criteria for considering a CI template "completed," and so on. The CI templates can contain fields that allow the functional analyst to capture the following information for a given SAP business process:

■ Security authorizations
■ Estimated number of users
■ Description of the business process
■ Ownership of the process
■ A high-level bulleted list of the requirements and expectations for the process
■ Development object considerations (e.g., reports, interfaces, conversions, enhancements)
■ Rationale for a requirement
■ Associated business process model
■ Master data
■ Dependencies
■ Impact on the existing organization

After the CI template for a business process is chosen from the drop-down list, the user is presented with a form constituting the CI template in which to populate information for the business process. (A minimal sketch of such a record appears below.)
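Purely as an illustration of the fields listed above (the record layout and completeness rule are assumptions for this sketch, not Solution Manager's actual CI template schema), a CI template entry could be modeled as a structured record whose blank fields are easy to detect:

```python
from dataclasses import dataclass, field

@dataclass
class CITemplate:
    """Hypothetical record mirroring the CI template fields discussed above."""
    business_process: str
    description: str
    process_owner: str
    estimated_users: int
    security_authorizations: list = field(default_factory=list)
    requirements: list = field(default_factory=list)        # high-level bulleted list
    development_objects: list = field(default_factory=list)  # reports, interfaces, ...
    rationale: str = ""
    business_process_model: str = ""   # reference to the associated diagram
    master_data: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)
    org_impact: str = ""

    def incomplete_fields(self):
        """Guideline check: a 'completed' CI template leaves no field blank."""
        return [name for name, value in vars(self).items() if value in ("", [], 0)]

ci = CITemplate(
    business_process="Create sales order",
    description="Order entry for domestic commercial customers",
    process_owner="Order Management team",
    estimated_users=150,
    security_authorizations=["sales order entry role"],
    requirements=["Order saved with pricing determined automatically"],
)
print("Fields still to populate:", ci.incomplete_fields())
```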


Well-documented CI templates can lead to the identification of SAP touch points, which are areas of integration for SAP scenarios, and also to the identification of business scenarios. A scenario is a subset of requirements; a scenario may provide coverage for one or more requirements. For instance, the order-to-cash scenario can integrate functionality from four or more SAP modules with data from interfaces, involve extended components such as remote function calls (RFCs) and multiple SAP roles, trigger workflow, and ultimately transfer data based on business logic from one SAP transaction code to the next. In this example the order-to-cash scenario may provide coverage for functional, security, workflow, and development requirements. With the help of a CI template, the functional and business analysts can determine the touch points for a scenario such as order-to-cash.

Scenario variations identified from CI templates can be driven by the input data or by the process. For instance, the order-to-cash scenarios may verify that SAP transaction "VA01" has been documented with multiple order types. The IBM tool Ascendant provides an example of a scenario, "Sale from Stock Using Scheduling Agreement," which has six different variations (cases or scenarios) based on factors such as the type of sales order, the source of stock, whether stock is pulled from inventory or configured from scratch, and exceptions. The following example for commercial orders was drawn from IBM's Ascendant methodology:

1. Commercial order: standard order flow with no exceptions
2. Commercial order: ship from stock
3. Commercial order: configure to order
4. International order: standard order flow with no exceptions
5. International order: ship from stock, one half from Mexico, one half from Canada
6. Intercompany order: standard order flow with no exceptions, and so on

The requirements identified from the CI templates can lead to the creation of scenarios, such as order-to-cash and sale from stock using a scheduling agreement, that have different variations. (A data-driven sketch of such variations follows.)
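Variations of this kind map naturally onto data-driven testing. The sketch below (hypothetical order types and checks, not IBM's Ascendant tooling) runs the same order-to-cash test logic once per variation:

```python
# Each tuple is one variation of the same end-to-end scenario:
# (variation name, order type, whether exception handling is expected).
VARIATIONS = [
    ("Commercial order, standard flow", "standard", False),
    ("Commercial order, ship from stock", "ship_from_stock", False),
    ("Commercial order, configure to order", "configure_to_order", False),
    ("International order, standard flow", "standard_intl", False),
    ("International order, split shipment", "split_intl", True),
    ("Intercompany order, standard flow", "intercompany", False),
]

def run_order_to_cash(order_type):
    """Placeholder for the real test steps (create order, delivery, billing).

    A real implementation would drive the SAP GUI or an automation tool here.
    """
    return {"order_created": True, "exception_raised": order_type == "split_intl"}

for name, order_type, expects_exception in VARIATIONS:
    result = run_order_to_cash(order_type)
    ok = result["order_created"] and result["exception_raised"] == expects_exception
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```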

Completed CI templates can also help the functional and business analysts develop functional and technical specifications, security roles, performance targets for system response times, SLAs, data elements, and development objects that determine the project's scope. Consequently, CI templates can lead to the creation of user requirements that can be verified through the execution of test cases.

Workshops held during the SAP blueprint phase provide a forum for populating the CI templates. Populating the CI templates helps to establish, confirm, or refine system requirements; the CI templates record what the system will do once it is deployed. Workshops are sessions in which the project members affected by the design of the SAP solution participate in facilitated seminars, with scribes, to identify what the SAP system will do for them. For existing or deployed SAP production-based systems, gap analysis, change requests, or end-user tickets logged through the production help desk may provide opportunities to gather and collect requirements, which can be further analyzed in workshops. The workshop participants can consist of SMEs, business analysts, a scribe, configuration experts, test team members, developers, and end users. Each participant plays a vital role in ensuring that the requirement is captured, peer reviewed, inspected, and finalized during the requirements gathering stage, which is part of the SAP phase known as blueprint. For example, during an initial workshop for an initial SAP implementation, the SAP functional consultant may describe the capabilities of an SAP module, and the subject matter expert and end user can describe how the legacy system works in relation to the described SAP capabilities, or what SAP functionality needs to be extended in order to meet needed functionality; this information can be captured in the CI templates that reside within the SAP Solution Manager platform.

The SAP Roadmap implementation methodology embedded within the Solution Manager platform provides hints and procedures for conducting a workshop. In particular, Section 2.4 of SAP Roadmap focuses on the mechanics of conducting a workshop to collect SAP requirements. Exhibit 3.1 shows specific steps from the SAP Roadmap methodology for drafting requirements during the blueprint phase.


EXHIBIT 3.1 Activities to Support Gathering of Requirements from SAP's Roadmap Methodology

Phase: Blueprint

Activities:
• 2.4.2 General Requirements Workshops
• 2.4.3 Business Process Workshops
• 2.4.4 Gap Analysis
• 2.5.2 Development Requirements Review
• 2.7 SAP Feasibility Check
• 2.8 Authorization Requirements and Design
• 2.8.2 User Roles and Authorization Design

Deliverables:
• Functional Design Specs
• Flow Process Diagrams
• Technical Design Specs
• Customer Input (CI) Templates

Tools:
• Identify Criteria for Evaluating Requirements
• Create a Requirements Traceability Matrix (RTM)
• Requirements Management Tool
• Throwaway Prototypes

During a workshop, expected SAP functionality and requirements can be gathered for a given SAP process within a single module, an enterprise area, or an end-to-end process. Typically, the SAP workshop gives the SAP configuration expert (who is considered the workshop facilitator) the opportunity to describe to the workshop participants the capabilities of the SAP system. As the workshop progresses, the facilitator probes the participants for information to determine their business needs, business functions, and expected tasks, and incorporates the information within the body of the CI template. A workshop facilitator can rely on a dedicated scribe to capture all feedback and comments from the workshop participants. Furthermore, the workshop scribe can document issues, concerns, or questions that the workshop participants raise in order to resolve them at a later time. Depending on the size and complexity of the SAP implementation, the project may need multiple iterations (rounds) of workshops in order to complete and finalize the CI templates. For processes and requirements that are captured but subject to interpretation and ambiguity, it might be necessary to illustrate the requirements with process flow diagrams containing swim lanes. The workshop facilitators and participants need to ensure that the captured requirements within the CI templates are consistent with their company's policies, business rules, and industry regulations.


For instance, industries such as pharmaceutical, airline, and utilities may have their business rules and logic governed by regulations from the Food and Drug Administration (FDA), the Department of Transportation (DOT), and the Federal Energy Regulatory Commission (FERC). As another example of considering acts and policies when capturing system requirements, federal agencies and Department of Defense units within the United States have financial requirements that are governed by the Federal Financial Management Improvement Act of 1996 (FFMIA) and the Joint Financial Management Improvement Program (JFMIP); thus the implemented SAP solution must be FFMIA and JFMIP compliant.

After requirements are documented, they should be inspected, peer reviewed, and subjected to a disciplined approval process. Exhibit 3.2 shows some of the roles associated with gathering and managing requirements. Well-documented and enforced criteria and standards can improve the quality of the documented requirements. Industry experts also recommend that test cases be drafted in parallel with the documentation of requirements as a means of improving the quality of the requirements.

EXHIBIT 3.2 Roles for Managing and Collecting SAP Requirements

Drafting and managing requirements is an integrated team effort:
■ CCB: manage changes to requirements; freeze ("lock down") requirements; communicate requirement changes.
■ SAP Expert: lead and manage workshops; fill out CI templates; construct functional and technical specs; diagram requirements; author requirements.
■ Testers: help construct the requirements traceability matrix (RTM); peer-review requirements; develop test cases to verify requirements.
■ SMEs: peer-review requirements; ensure requirements align with the company's policies and business rules; sign off on requirements; participate in requirements workshops; ensure requirements fulfill "as-is" system functionality.


As requirements are captured within a workshop, the functional test team member can develop test cases to verify each requirement before it is coded or configured within SAP. Illustrating the requirements with flow process diagrams, or demonstrating them with a throwaway prototype, can further enhance the quality of a requirement and reduce its ambiguity.

Under the SAP Roadmap methodology, opportunities are provided to refine, clarify, and address issues and gaps within requirements. One of these opportunities is known as the SAP feasibility check (Section 2.7 of Roadmap). The SAP feasibility check brings experienced SAP resources to the project to perform services such as the evaluation of documented business processes and risks, the determination of expected business volumes, and risk assessment. According to SAP Roadmap, the following activities are major outputs of the SAP feasibility check:

■ "Mapping of the core business processes with SAP standard functionality and planned developments that identifies:
  ● Major gaps and modification requirements
  ● Critical functions and critical integration requirements
■ Check any functional risk of the planned solution, including business processes and gap analysis
■ Determination of expected business volume and number of users across the different components in the solution landscape
  ● Check of sizing and performance
  ● Assessment of availability requirements and management demands"

The SAP feasibility check concludes with written reports and presentations from the SAP experts. An SAP feasibility check may reveal that the requirements cannot be implemented, need to be modified, or can be implemented as stated. For example, the experts conducting the SAP feasibility check may show that a company's plan to have 10,000 company codes in a production environment would degrade the system's response times.

In the absence of CI templates or Solution Manager, another requirements elicitation technique is the user questionnaire or interview. This approach presents a questionnaire to legacy and production users that allows further decomposition of the project's scope statement.


Most projects develop scope statements at a high level that need further decomposition in order for the SAP functional and development teams to discern what needs to be configured or coded. The owner of the questionnaire can provide instructions, deadlines, and guidelines for filling it out. The elicitation can be documented within a text editor or spreadsheet and queries end users on their expected production tasks. Exhibit 3.3 is a sample questionnaire for the configuration of the SAP human resources (HR) module for a global SAP rollout to multiple countries. Responses to the questionnaire in Exhibit 3.3 can help the SAP HR functional expert define or refine the project's scope statement, develop business processes, draft high-level requirements, and consequently configure the system.

EXHIBIT 3.3 Questionnaire to Collect Requirements for the SAP HR Module (each question is paired with a blank "Responses" column)

■ Do you use SAP for HR? If not, what do you use?
■ Do you use SAP for Payroll? If not, what do you use?
■ Who enters time at your location? (Specify: 1. Timekeepers, 2. Employees, 3. Administrators, 4. Contractors, 5. All of them, or 6. Other)
■ Define the people who are responsible for time entry and timesheet approval.
■ What system do you use to enter time?
■ Describe your current process for entering time (include diagrams if necessary).
■ What locations and sites do you enter time for?
■ What payroll areas do you enter time for?
■ What type of identification number is required in your timesheets (Social Security number, employee number, etc.)?
■ How many users are expected to enter timesheets at your location?
■ Describe any target or source systems where the timesheet data is received from or sent to.
■ List any reports that are necessary to review timesheet entries.

After the questionnaire is completely filled out and turned in, the functional expert can illustrate the questionnaire responses with diagrams and/or system prototypes to confirm understanding of the end users' responses. The same principle of peer reviewing and inspecting requirements that originated from CI templates applies to the requirements (responses) obtained from the sample end-user questionnaire. SAP projects can develop similar questionnaires for other SAP modules.

Another technique for capturing SAP requirements is the creation of Unified Modeling Language (UML) use cases. Use cases allow analysts to capture what the system does, but not how it does it. The use case approach focuses on the end user's tasks and the system's actions, and use cases can be used to derive functional requirements. For instance, for a Web site bookseller, a use case may show that an Internet shopper (the actor) accesses the Web site to buy books (the use case). Use cases are accompanied by narratives that provide attributes and information for the use case, such as priority, frequency, description, preconditions, postconditions, primary actors, error handling, and variations. A use case that is missing a narrative is deficient and incomplete. Use cases can be verified with test cases. UML notation and diagrams such as use cases can be drawn with software from vendors such as Rational Rose, Altova UModel, or Smart Draw, to name a few. The software ARIS™ from IDS Scheer, which integrates with SAP's Solution Manager, offers the capability to design UML diagrams and SAP business process diagrams. Intellicorp offers the LiveModel™ solution, which is a graphical repository for documenting SAP business processes and comes with a prebuilt 4.7 or 5.0 SAP reference model preloaded. Exhibit 3.4 shows an example of a use case for ordering CDs from a Web site.

EXHIBIT 3.4 Use Case for Ordering CDs from a Web Site (use-case diagram: a Customer actor browses by category, searches for a specific CD, looks for a CD, and orders/checks out, with included Credit Card Verification and Customer Notification use cases; the Internet Sales System interfaces with the CD and Marketing databases, the Distribution System sends the CD to the customer, and Vendors receive e-mailed marketing material)

For existing production-based SAP implementations, requirements can come from a gap analysis, from requirements deferred from a previous release, or from help desk-reported errors. One example is a large corporation that has rolled out SAP to some of its divisions or to a specific geographical area; the SAP project team now needs to include new regions or divisions and must take into account requirements, business processes, and business rules from a different set of users. Gap analysis is the term used to identify what new functionality will be included or modified from an existing release to meet the requirements for a new set of users. User surveys or questionnaires such as the one shown in Exhibit 3.3 can assist in documenting the information gathered from a gap analysis.

An existing SAP project can also expect to inherit new requirements from a previous system release. For instance, an SAP project that is subject to multiple go-lives and releases may have had some requirements deferred from one system go-live to the next due to budget or schedule considerations; the deferred requirements now need to be included in the system design. For projects that have allowed much time to elapse between SAP releases, it may be necessary to reevaluate all previously deferred requirements for consistency, necessity, and completeness.

In an existing production environment, the SAP production team and help desk may get calls from end users reporting deficiencies or


software problems. Depending on the nature of the reported problem, its solution may require the application of an OSS (Online Service System) note, a graphical user interface (GUI) upgrade, a patch, a configuration change, or a new system enhancement. These reported problems create opportunities to add new or improve existing system functionality: a reported problem can represent a feature that was not addressed by previously gathered requirements or can constitute a new requirement. Complex enhancements or system changes may require the reconfiguration of various areas of the system, the design of new development objects, and the addition of new security roles. For system requests originating from the help desk, it is highly recommended that a CCB be in place to evaluate the merits, costs, effort, and rationale associated with implementing the request, in order to avoid scope creep or gold plating.

The methods, techniques, and approaches described in the preceding sections can help SAP functional experts determine what processes are in scope for the next SAP go-live or cutover. The aforementioned requirement-gathering techniques can lead to the creation of a BPML that identifies which SAP transactions are in scope, the associated SAP roles/profiles for accessing the SAP transaction codes, functional and technical specifications, the end-to-end scenarios including integration areas (touch points), and the associated reports, interfaces, conversions, and enhancements (RICE) objects. Collecting and drafting requirements is important, but managing them is arguably just as important.

TOOLS FOR MANAGING REQUIREMENTS

While gathering and collecting requirements is important in order to design a system that meets the end user's expectations, another critical element is how the requirements will be managed, tracked, monitored, and changed after they have been captured. In an SAP implementation a requirement may pass through many statuses during its life cycle. For instance, after a proposed requirement has been approved, it may enter other states, including changed, deleted, deferred, and rejected. A requirement may also be verified and implemented. Additionally, the typical SAP implementation will have requirements that fall under various categories, such as: (1) functionality, (2) performance, (3) security, (4) workflow, (5) development objects, and (6) usability.
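As a minimal sketch of such a life cycle, the following assumes a simple in-house model; a commercial requirement management tool provides this, plus versioning, security, and approvals, out of the box. The identifier and status values are illustrative:

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    CHANGED = "changed"
    DEFERRED = "deferred"
    REJECTED = "rejected"
    DELETED = "deleted"
    VERIFIED = "verified"
    IMPLEMENTED = "implemented"

@dataclass
class Requirement:
    req_id: str            # unique identifier, e.g., "REQ-SD-0042" (hypothetical)
    description: str
    category: str          # functionality, performance, security, workflow, ...
    status: Status = Status.PROPOSED
    history: list = field(default_factory=list)

    def change_status(self, new_status: Status, approved_by: str) -> None:
        # Record every status change and its approver, giving an audit trail.
        self.history.append((self.status.value, new_status.value, approved_by))
        self.status = new_status

req = Requirement("REQ-SD-0042", "Create sales orders for export customers",
                  "functionality")
req.change_status(Status.APPROVED, approved_by="CCB")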


Managing the status of requirements in disconnected spreadsheets or in text editors that lack version control, audit trails, and access from remote locations is a logistical nightmare that can compromise the quality of the requirements and subsequently the system's design. Fortunately, many commercial vendors offer solutions for managing requirements within a single repository that includes the necessary security features, version control, audit trails, history logs, and approvals for changing the requirements. Other benefits of using requirement management tools include the ability to link requirements to test cases and integration with third-party test management tools. For instance, Rational's RequisitePro integrates with Mercury Interactive's Quality Center (TestDirector). Other software solutions for requirements management include DOORS, Serena RTM, and CaliberRM.

The full promise of a requirement management tool is not realized until a process is defined for managing the requirements, one that covers evaluating the impact of a change to a requirement, communicating requirement changes, identifying criteria for prioritizing requirements, identifying the owner of each requirement, inspecting requirements, and establishing a CCB. That promise is manifested when the project stakeholders understand the documented procedures and formal process for changing a requirement and recognize that the requirement management tool is the only official repository for all project requirements. After requirements have been captured and stored in a requirement management tool, they can be evaluated based on predefined criteria.

EVALUATING REQUIREMENTS

In her book Effective Software Testing: 50 Specific Ways to Improve Your Testing,3 Elfriede Dustin presents several criteria and provides a checklist to help peer reviewers and authors of requirements verify and

3 Dustin, Elfriede, Effective Software Testing: 50 Specific Ways to Improve Your Testing, Addison-Wesley Professional, December 2002.


measure the quality of a requirement. Dustin measures requirements based on the following nine attributes:

1. Correctness
2. Completeness
3. Consistency
4. Testability
5. Feasibility
6. Necessity
7. Prioritization
8. Unambiguousness
9. Traceability

These attributes are essential for evaluating the quality of captured SAP requirements that subsequently become business processes, scenarios, transaction codes, interfaces, reports, conversions, security roles, workflow logic, and so on. Requirements are evaluated through inspections and peer reviews. In an inspection, various stakeholders meet and subject the requirement to the criteria; requirements not meeting the criteria are either modified or deleted. The following descriptions of the attributes help evaluate the quality of a requirement:

1. Correctness. Ensures that the requirement does not conflict with the company's business rules, policies, standards, regulations, or other previously approved requirements. This attribute ensures that the user's voice (expectation) for the system functionality or expected system task is captured correctly during the requirements-gathering phase.

2. Completeness. All requirement information and elements should be included; a requirement cannot have any missing elements. Since many SAP functional requirements focus on what the user does or the tasks that the user performs, the likelihood of overlooking a requirement element is minimized.

3. Consistency. Requirements that are consistent do not conflict with each other. Since SAP is an integrated system with integrated areas (touch points), one has to ensure that the requirements are in harmony with one another.


4. Testability. A requirement needs to be verifiable with one of the methods for software verification (e.g., inspection, demonstration, test, or analysis). If a requirement cannot be verified with any verification method, including the execution of test cases, then it falls short of the definition of a requirement.

5. Feasibility. This criterion ensures that the requirement can be implemented and tested given the project's resources, technology, budget, and time frame. Functional, security, workflow, and development SAP resources can work with the workshop participants to determine which requirements can be built or customized in the SAP system with user exits, ABAP development, or system configuration.

6. Necessity. A requirement needs to add value and have merit for the project. The requirements must address the needs and expectations of the system stakeholders and end users. Question all requirements, asking "What would happen if this requirement did not exist?" If the answer is "Nothing would happen," that is a sign that the requirement is not needed or adds no value. Requirements that cannot be traced to any origin are probably not needed or are out of scope for the existing implementation.

7. Prioritization. While all requirements are important, some are more so than others. Assigning priorities to requirements allows the CCB and project manager to respond to requirement changes based on events such as descoping, unexpectedly compressed testing time frames, or budget cuts. Ordinal scales that rank requirements on a sliding scale (e.g., 1 to 5) are helpful for prioritizing the importance of a requirement. Alternatively, one can prioritize a requirement as low, medium, or high. Requirements that are crucial to running the business and have no workarounds are the most important ones, whereas requirements that have workarounds or do not bring down the business if not verified can be considered of medium importance or "nice to have." For instance, security requirements are extremely important since their implementation helps companies comply with SOX regulations. In addition, functional requirements for making payroll runs, creating sales orders, and making a material requirements planning (MRP) run are also extremely important because they allow an entity to operate; if they are poorly implemented the company would suffer business


disruptions. The following criteria from IBM's Ascendant help to rank and prioritize requirements (a sketch of how these criteria might be combined into a priority score appears after this list of attributes):

● Frequency (How often does the process occur?)
● Impact (What is the effect if the process is down?)
● Difficulty (What is the probability that problems occur?)

8. Unambiguousness. Requirements that can be interpreted differently by different sets of users are too broad, not decomposed enough, or ambiguous. Often, during user acceptance testing (UAT), the SAP end user will say, "This is not what the system is supposed to do" after a test case is executed, and the configuration expert will quip, "But that's how I interpreted your requirement/business scenario." This situation is a sign that the requirements were not thoroughly inspected. Requirements need to be stated precisely, without leaving room for doubt or confusion. Requirements cannot be subjective, meaning person X interprets the requirement one way and person Y interprets the requirement differently. For example, ambiguous statements such as "Make the system as fast as possible for response times," "Provide error messages for invalid input from end user," and "Provide financial reports for month-end closing activities" can be interpreted in multiple ways, which will cause many design changes and testing defects. In the case of the performance requirement, one can build a system that has average response times of 10 seconds per SAP transaction code screen or response times of two minutes per screen.

9. Traceability. A requirement needs to refer back to the source that originated it. For instance, captured requirements in a requirement management tool need to trace to their origin, which can be the workshops, surveys, help desk tickets, end-user questionnaires, CI templates, use cases, flow process diagrams, prototypes, and so on. Traceability tells the project where the requirement came from, which helps to justify the existence of the requirement.
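The following is a minimal sketch of how the frequency, impact, and difficulty criteria might be combined into an ordinal priority score; the 1-to-5 ratings, weights, and thresholds are illustrative assumptions, not figures from IBM's Ascendant methodology:

def priority_score(frequency, impact, difficulty, weights=(0.4, 0.4, 0.2)):
    """Combine 1-to-5 ratings into one score; higher means more important."""
    wf, wi, wd = weights
    return wf * frequency + wi * impact + wd * difficulty

def priority_band(score):
    # Map the numeric score onto the low/medium/high scale used in the text.
    if score >= 4.0:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

# Example: a payroll-run requirement that occurs often, halts the business
# when down, and is moderately difficult to get right.
score = priority_score(frequency=5, impact=5, difficulty=3)
print(score, priority_band(score))   # 4.6 high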

BUILDING A REQUIREMENTS TRACEABILITY MATRIX

After requirements are drafted, entered into the requirements management tool, and peer reviewed against corporate policies, business


rules, and the project scope document, one can proceed to create an RTM. The RTM is a representation of user requirements aligned against system functionality. The RTM is used to ensure that all requirements (not just functional requirements) are met by system deliverables. The RTM maps requirements to test cases. The benefits of the RTM include:

■ Bridging requirements to functional and technical design.
■ Addressing the verification question: "Are we building the system right?"
■ Ensuring full requirements coverage by linking all requirements to design, development, and test cases.
■ Keeping track of changes to requirements for retesting (when a requirement is changed, it needs to be retested).

Without an RTM, it is difficult to determine whether the proposed solution fulfills all end-user requirements, which forces project managers to make uneducated and subjective guesses as to whether the system can be deployed into production. An RTM can be constructed with the help of a requirements management tool in which requirements are stored within a single repository. Constructing RTMs in text editors or spreadsheets is a fruitless exercise, as requirements changes cannot be effectively communicated or managed, which causes chaos, design flaws, and scope creep.

A traceability matrix is created by associating requirements with the test cases or scenarios that verify them. Each requirement, including parent and child requirements, should have a unique identifier. Test cases or scenarios are associated with the requirements that they represent, and test scenarios represent the model of the system to be built. A requirement may be verified with one or more test cases. However, a complex, lengthy requirement that needs to be verified with multiple test cases is usually a sign that the requirement has not been decomposed sufficiently. The construction of an RTM may require input and feedback from several stakeholders, although one team, such as the test team, can take ownership of the RTM.
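As a minimal sketch of this mapping, the following builds a toy RTM in memory and reports coverage; the requirement and test case identifiers are hypothetical, and a real project would draw this data from its requirements management tool:

# Map each requirement ID to the test cases that verify it (hypothetical IDs).
rtm = {
    "REQ-SD-001": ["TC-101", "TC-102"],   # sales order creation
    "REQ-MM-007": ["TC-210"],             # MRP run
    "REQ-SEC-003": [],                    # security role, not yet covered
}

def orphans(rtm):
    """Return requirements with no associated test case, i.e., coverage gaps."""
    return [req for req, cases in rtm.items() if not cases]

def coverage(rtm):
    verified = sum(1 for cases in rtm.values() if cases)
    return verified / len(rtm)

print(orphans(rtm))            # ['REQ-SEC-003']
print(f"{coverage(rtm):.0%}")  # 67%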


CHANGE CONTROL BOARD

A CCB is instrumental in enforcing change control policies for requirements. CCBs can enforce policies for the following potential situations:

■ The end user creates a help desk ticket for what is perceived to be faulty system functionality, and the CCB determines that the help desk ticket is, in fact, a system enhancement feature not in scope or captured in the original functional requirements and that implementing the feature would cause scope creep.
■ During the UAT phase, after test cases are executed, the end user reports that the system is "missing" functionality, and the CCB determines that the functionality is "missing" because it has been deferred to a future release and thus will not be added during the existing UAT phase.
■ The project experiences budget cuts, and the CCB determines which requirements have to be deferred, modified, or rejected.
■ After prototypes are shown, the end user requests system changes that cause new "proposed" requirements to be considered for the project scope, and the CCB determines the impact, resources, time frames, costs, and benefits of adding new requirements to the system.
■ After system changes are made and tested, the CCB ensures that all procedures, project documentation, and approvals are adhered to before the object (system change) is transported into a production environment. The CCB also ensures that objects are transported in the correct order and sequence to the target environment.

The CCB has the responsibility of communicating system changes to the project and ensuring that the requirement management tool is the official repository for all project requirements. The CCB determines the priority of a system change and which system changes need to be transported on an emergency basis to avoid system failure or disruption to the business. Members of a CCB can include the functional team leaders, integration manager, project manager, test team manager, Basis leader, security and development team leaders, and SMEs. The CCB plays a vital


role in the life cycle of the requirements after they have been captured, baselined, and approved. Without a CCB, the requirements may be modified or deleted without appropriate communication and impact analysis. This may cause project delays, budget overruns, and system failure.

INDEPENDENT VERIFICATION OF REQUIREMENTS

Independent verification allows companies requesting SAP services from system integrators to inspect testable requirements from a neutral point of view before the system is designed and configured, and also to verify that the system fulfills the testable requirements before the design or solution is deployed into a live production environment. Ideally, independent testing efforts meet the definition established by Institute of Electrical and Electronics Engineers (IEEE) standards and are performed by individuals who do not have a vested interest in the success or failure of the SAP implementation. According to IEEE standards,4 the definition of independent verification and validation (IV&V) is as follows:

    Independent verification and validation (IV&V) is performed by an individual or organization that is technically, managerially, and financially independent of the development organization.

Given the limited budgets and resources and the stringent deadlines that accompany most SAP implementations and SAP upgrades, it may prove difficult to establish IV&V activities at an SAP project per established IEEE guidelines. SAP projects may have quasi-independent testers who meet one or two of the independence criteria established by IEEE for IV&V, but not all three. For example, a client may pay a third-party agency not affiliated with the SAP system integrator or the client to independently verify testable requirements and test results, which could meet the IEEE criteria of technical and managerial independence; but the third-party agency may get its funding from the project management office (PMO) or the project

4 IEEE Std 729/610.12, Glossary of Software Engineering Terminology, New York, 1990.


manager, and thus would fail to meet the criterion of financial independence. Despite the inability of most SAP projects to fully satisfy independent verification and validation per IEEE standards, the concept of "independent" testing needs to be introduced to SAP projects where a system integrator is paid to implement SAP, in order to reduce the conflict of interest between the system integrator and the client requesting SAP services. A conflict of interest arises when a system integrator is paid to deliver SAP services and is also placed in the position of finding defects in its own delivered SAP design and functionality, since reporting those defects could hamper the system integrator's ability to meet deadlines and collect incentive bonuses from the client.

The trend and pattern for implementing or upgrading SAP is that the company seeking to implement SAP will spend millions of dollars for an implementation partner to analyze, design, construct, test, implement, and deploy the solution and provide end-user training. While this approach and model for implementing SAP has been emulated at thousands of SAP projects, it raises the question of independence. In other words, how does the client company know that the implementation partner has properly tested the solution? Is it possible that the implementation partner has a conflict of interest in getting the SAP solution deployed as quickly as possible to meet project deadlines and earn bonuses, causing testing activities related to test case design, test case execution, and defect reporting to take a backseat? Even if the implementation partner does not have ulterior motives for improperly testing the system and verifying the testable requirements, can one reasonably assume that the implementation partner has a robust testing methodology, knowledge of automated test tools, and the available resources or expert resources to test and verify all captured requirements?

After pondering all these questions and considering compliance with regulations such as Sarbanes-Oxley (SOX) Section 404 and with agencies such as the FDA, FERC, and the Securities and Exchange Commission (SEC), one reaches the inevitable conclusion that having the company that designs and installs the system also test the system is the equivalent of "putting the fox in the chicken coop." To mitigate the risk of having an implementation partner that does not verify all requirements or has a conflict of interest, the following recommendations are made:


■ Construct and build an RTM with a tool specifically designed for the purpose, such as Serena RTM, which is preferred over constructing an RTM in disconnected spreadsheets.
■ Hire a firm that provides independent testing services and is not under the authority or control of the implementation partner. The independent firm should verify all requirements, not just the functional requirements; requirements related to performance, security, disaster recovery, usability, and development objects should also be verified. The independent firm has the right to challenge and question the implementation partner's design of the solution and configuration of the system based on the drafted requirements.
■ Conduct a thorough user acceptance test with qualified candidates representing the end users and SMEs. UAT should prove that the solution works from the point of view of the end users. Merely showing system prototypes or presentations is not sufficient to demonstrate to the end users that the solution meets their expectations. UAT is more valuable when end users can execute actual hands-on test cases and report defects where applicable.
■ Implement exit criteria and certification processes at the end of each testing phase.
■ Avoid the use of waivers and exceptions that the implementation partner typically proposes to the client for unfulfilled or unverified requirements in order to promote the system into production.

An RTM provides a unique identifier for each requirement and ensures that each requirement is mapped to a test case. Reports from an RTM can demonstrate which requirements are "orphans" because they have not been tested or verified with a test case. For requirements that are mapped to test cases, the client should request evidence that the test case has actually been executed, with an audit trail and printouts of screen shots where appropriate. Every time a requirement has the status of "verified," the client company can ask to see which test case was executed, and its results, to verify the requirement.

Implementation partners vary in expertise, breadth, and number of resources when implementing a solution such as SAP. An implementation partner that excels in developing SAP interfaces may not have expertise in developing test plans and exit/entrance criteria, creating automated test cases, designing test cases, developing testing standards,


gathering and managing requirements, and so on. Out of political considerations, some implementation partners hide defects identified during the testing phases from the client to minimize client concerns over the system design. Bringing in a firm that has no allegiance to the implementation partner can mitigate the risk of promoting into production a system with defects. Independent firms are not paid by the implementation partner, and thus the implementation partner is unlikely to exert any political pressure on them. An independent firm that specializes in SAP testing will report system defects when the requirements are not verified through the execution of test cases, which may delay the go-live date; but it will also reduce the impact of having an unstable production system and shift the burden onto the implementation partner to build the system correctly based on the approved requirements.

A comprehensive UAT, one that follows the integration-testing phase, allows the end users to interact with the system before it is deployed. A good approach for UAT is to have a preselected list of test scenarios previously executed during integration testing reexecuted during UAT with specially trained members from the end-user community. The Roadmap methodology implies that the integration test should have participants known as "extended users," who can be end users. However, that in and of itself does not create an independent test. UAT should be a dedicated testing effort in which the end users have the opportunity to report errors and defects with the application and challenge the validity of "verified" requirements. UAT participants should perform their testing based on their designated SAP roles (e.g., inventory clerk) as opposed to testing the system with SAP_ALL access. Problems, errors, and defects unresolved from UAT should be taken into account before making a go/no-go decision to deploy the application.

Exit criteria also allow the project to put a safeguard in place to ensure that requirements have been verified. Exit criteria define the conditions under which a testing effort may come to an end before moving on to another testing effort or project phase. Exit criteria can specify, for example, that no defects remain open or that all requirements have been verified before the testing ends.

Finally, avoid the use of "waivers" and "exceptions" for in-scope requirements that were not fulfilled. Implementation partners that cannot fully design the solution rely heavily on client waivers as a


means of expediting project tasks and dealing with problems "later on" or having the production team fix the problems. Waivers do have merit when the implementation partner faces a situation outside of its control, such as waiting for a patch from the vendor. However, when used haphazardly for waiving requirements, they increase the risk and instability of the production system. All waivers need to be accompanied by a mitigation or workaround strategy, an expected resolution date, and an owner so that they do not fall off the edge of the earth. The CCB can evaluate how a waiver to a requirement can impact other requirements.
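Returning to the exit criteria mentioned above, the following is a minimal sketch of how such criteria might be checked before closing a testing phase; the severity scale, status values, and thresholds are hypothetical and would be set by each project's own exit criteria:

def may_exit_phase(open_defects, requirements):
    """Return True only if the phase's exit criteria are met."""
    # Hypothetical criteria: no open severity-1 defects and every
    # in-scope requirement verified by at least one executed test case.
    no_critical = all(d["severity"] != 1 for d in open_defects)
    all_verified = all(r["status"] == "verified" for r in requirements)
    return no_critical and all_verified

defects = [{"id": "D-17", "severity": 2}]
reqs = [{"id": "REQ-001", "status": "verified"},
        {"id": "REQ-002", "status": "approved"}]   # not yet verified
print(may_exit_phase(defects, reqs))   # False: REQ-002 blocks exit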


CHAPTER 4

Estimating Testing Costs

Testing is an activity that typically consumes the most resources and budget in any SAP implementation or SAP upgrade. When broken down into smaller subtasks, testing is expensive because it requires labor hours from multiple project resources for test planning, test automation, test execution, recording test results, resolving defects, impact analysis, retesting the application, and applying lessons learned from testing. For example, the creation, peer review, and approval of a single SAP test case may require labor hours from the business analyst, SAP consultant, system architect, quality assurance (QA) representative, subject matter expert (SME), and test team member. The costs associated with SAP testing are primarily attributed to the following activities and/or events:

■ Hardware equipment (e.g., machines, servers, laptops, printers)
■ Software costs (e.g., automation, version control, and test management software)
■ Billable hours for testing activities such as test planning, test design, test execution, and defect resolution (e.g., hourly rates for contractors, employees' labor costs)
■ Outsourcing agreements (e.g., paying a third-party entity to conduct independent or automated testing)
■ Training costs (e.g., learning test procedures for reporting defects)
■ Person-hours spent enforcing quality standards and lessons learned

Accurate estimation of SAP testing costs for a given SAP test cycle (e.g., regression, performance, integration) is complex because few companies implementing SAP maintain historical records of the time actually spent planning and executing test cases for previous test cycles, and because testing is typically viewed as an activity that is


subject to truncation or compression when the project experiences budget cuts or falls behind schedule. In order to properly plan and estimate the costs of testing within an SAP environment, it is necessary to recognize all factors and conditions that contribute to testing costs, such as historical data from previous testing cycles, expert estimates of the duration of test activities, the use and maintenance of automated test tools, the thoroughness needed for documenting test cases and test results, and the level of expertise of the assigned SAP testers. After the testing costs have been estimated and planned, the project manager and test manager can spread the testing costs across the project's schedule to produce a budget and validate the budget against project constraints.

CHALLENGES IN COST ESTIMATING

The key factors that obscure the true testing costs are as follows:

■ Basing cost estimates on poorly defined testing activities and work packages
■ Not considering the cost of the hardware, human, and software resources needed to conduct testing (e.g., automated test tools, test management tools, test labs, contractors' billable rates for conducting a stress test)
■ Tying testing costs to an ill-developed project schedule (missing dependencies, erroneous activity duration times, missing activities, etc.)

One of the factors contributing the most to the miscalculation of testing costs at most SAP implementations is that many activities related to test execution and test planning are not always clearly identified in a project schedule. For instance, the following activities and events are related to testing but may not be clearly tracked or identified as such in an SAP implementation:

■ Purchasing automated test tools and test management tools
■ Maintaining test tools and test management tools
■ Prototyping testing concepts (e.g., a prototype of an automated test case for a project that has recently purchased an automated test tool)
■ Applying lessons learned from a previous testing cycle
■ Ongoing and year-round regression tests to support system upgrades, OSS notes, system enhancements, hot packs, etc.
■ Resolving end-user tickets reported to the production help desk
■ Reconfiguring and retesting the system because the original requirements were not captured or interpreted correctly
■ Training resources on test procedures and test standards
■ Enforcing testing standards
■ Time spent retesting identified defects
■ Time spent developing automated test cases, which can require support from subject matter experts, business analysts, and configuration experts
■ Time allocated for a performance/stress test, which can require assistance and support from the Basis team, DBAs (database administrators), the infrastructure team, business analysts, SAP functional experts, advanced business application programming (ABAP) developers, etc.
■ Time and resources allocated to establishing the test environment
■ Time spent executing manual test cases
■ Costs to establish and maintain a test lab, which can include machines, printers, phones, a separate local area network (LAN), etc.
■ Costs of collecting, gathering, and managing test results
■ Time spent maintaining and updating testing deliverables (e.g., test plans, test strategies, test cases, test scripts)

These activities and events are only a partial listing of what it takes to establish and maintain a testing program at an SAP implementation, and they can hide or disguise the "true" testing costs. The biggest obstacle in estimating testing costs, however, is that most projects do not have historical data to recycle from previous testing cycles or a breakdown of testing activities with sufficient granularity to determine the time spent on individual testing subtasks. SAP testing is time consuming and resource intensive for both initial and existing SAP implementations. For an initial SAP implementation, up to 50


percent of all project resources and budget may be dedicated to supporting all testing activities, either directly or indirectly. For an existing SAP implementation with a large functional scope, one can expect to spend over 5,000 person-hours planning test cases, executing test cases, documenting test cases, logging test results, and resolving testing defects for a regression test. But exactly how much time is allocated to each testing activity and testing subtask based on previous testing cycles is an enigma at most projects, since this information is not stored or maintained anywhere.

For instance, the creation and execution of a test case to validate a new system change (e.g., the creation of a custom SAP transaction with different screen and validation rules) to be transported into the production environment may require multiple activities and person-hours from various project resources, which may not be individually tracked or monitored in a project schedule. The potential testing activities associated with a new system change are as follows:

■ Identification, modification, or creation of the test requirement, business process procedure (BPP), and flow process diagram (resources: subject matter expert, business analyst, SAP consultants, end user, system architect, and test team member)
■ Construction of the test case with test steps, test conditions, and expected results (resources: SAP consultant, test team member)
■ Rehearsal of the documented test case to ensure that it was documented properly (resources: SAP consultant, test team member)
■ Peer review and approval of the test case (resources: SME, end user)
■ Manual execution of the test case (resources: test team member)
■ Recording of test results (resources: test team member)
■ Resolution of defects (resources: business analyst, SAP consultant)
■ Retesting of the change if defects were identified (resources: test team member)
■ Automation of the new process with an automated test tool (resources: contractor from the test team who specializes in the test tool, SAP consultants, SME)

All the aforementioned testing activities for promoting a system change into production may consume hundreds of person-hours from project resources, which translate into thousands of dollars, yet these testing costs might be overlooked because the test manager or project


management office does not document or track these individual activities in a project schedule. For larger testing efforts such as performance, integration, or regression testing, omitting or overlooking individual testing subtasks and work packages in a scheduling system obscures the true cost of all testing activities.

Overlooking testing subtasks is one of the many factors that hide the true cost of testing. The cost estimates used for test planning and test execution can also be impacted by other factors, such as the level of SAP expertise of the individuals planning and executing test cases; the enforcement of QA standards for documenting test cases and test results, which can cause rework of testing deliverables; the stability of the system configuration, which may cause rework of automated test cases; and compliance with regulations such as good manufacturing practices (GMPs) or Sarbanes-Oxley (SOX) that strictly govern the documentation for recording and validating test results. Furthermore, the hardware and software resources needed to facilitate and enhance the planning and execution of test cases are costs that must be taken into consideration when estimating expected testing costs.

To overcome the challenges of estimating testing costs, it is necessary to identify all activities, deliverables, and tasks related to testing, the labor hours allocated to each testing task, the rate per hour, and the costs of the training, hardware, and software resources needed to support testing. The entire scope of testing is in fact far more encompassing than the mere execution of test cases, which is often viewed as the only activity that produces testing costs.

SCOPE OF COSTS

The scope of testing costs includes all activities associated with requirements gathering, test planning, test design, test execution, test reporting, and defect resolution. Testing activities occur in multiple phases, which include unit, development (reports, interfaces, conversions, and enhancements), scenario, integration, performance, stress, and regression testing. To implement SAP from scratch or maintain an SAP system, it is necessary to conduct continuous testing cycles and commit hardware, software, and human resources in support of testing tasks. For an


established or production-based SAP system, it is possible that a project may have resources assigned to testing tasks from both the production team and the development team. For example, a new system enhancement will require unit, string, integration, and regression testing by the SAP development team before it is deployed into the production environment, and subsequently the new system enhancement will be tested by the production team after they assume ownership of the promoted system enhancement.

After the SAP system has been promoted into production, the scope of testing costs includes modifying test cases, authoring new test cases, identifying data sets, constructing a test environment, the costs of testing software and hardware, and the staff-hours needed to execute test cases, record test results, and resolve defects. In an SAP production-based mode, other testing costs may include costs from outsourcing agreements to test the application remotely or with automated test cases developed in offshore locations, and also the testing costs associated with the rework and redevelopment of previously implemented system functionality that was promoted but does not behave as expected. For production-based environments, the testing costs are also manifested in activities such as maintaining the testing software, maintaining and updating testing documentation, applying testing lessons learned, generating reports, and producing testing evidence to support testing audits. Given the steady stream of patches, OSS notes, hot packs, system upgrades, and system enhancements that an SAP production system receives, it is conceivable that multiple test types would need to be executed in order to support a production-based SAP installation.

For initial SAP implementations, the extent and scope of testing costs include the establishment and enforcement of testing standards, staffing the testing team, acquiring and procuring the necessary testing resources (hardware, software), executing test cases, recording test results, and resolving defects. Initial SAP implementations may have reduced or limited functional scope, which in turn may decrease the testing costs and the size of the testing team. However, as the SAP implementation grows in functionality, number of modules, modifications, and SAP bolt-on components (e.g., business warehousing) after it has been promoted into the production environment, the size of the test team and the overall testing costs will increase proportionally to account for testing by both the production team and the development team before changes are deployed into the production environment.


The costs of testing for a production-based SAP implementation are primarily a by-product of the amount of regression testing that is needed before new functionality is deployed into the production environment. Companies that want to reduce the manual testing costs of moving objects into the production environment are shifting to test automation or outsourcing agreements as a means of executing and playing back large functional SAP scenarios unattended, and within a short time span, before promoting an object into production. On the other hand, companies relying solely on manual testing and manual recording of test results run an increased risk of not having the resources and bandwidth to test all impacted combinations of business scenarios and end-to-end functionality before an object is transported into the production environment, which increases the testing risk and testing costs, decreases the stability of the production environment, and creates rework.

TECHNIQUES FOR ESTIMATING COSTS

Estimating testing costs can be facilitated with two models: (1) expert judgment or (2) historical information. With expert judgment, a project member who is experienced with testing activities can estimate the resources and hours needed to conduct a testing task, such as the time needed to design a test case that involves five transaction codes or the total time needed to execute an end-to-end scenario. A person estimating testing costs with expert judgment may rely on established industry guidelines and benchmarks, software methodologies, or his or her own hands-on experience. For instance, IBM's Ascendant SAP methodology shows that documenting test results for test cases with an average complexity level may take as long as 30 minutes per person. On the other hand, historical information provides a repository of information on the number of time units needed by each project-specific resource to conduct a testing task. For instance, historical information may show that a given project member takes on average two business days to resolve a defect with a severity level of one, or may show that test cases for a particular business area (e.g.,


warehouse management) take on average five hours to execute per resource. Alternatively, for companies relying on earned value calculations, historical data may show that a previous testing phase required 5,000 person-hours at a total cost of $2 million, which can serve as a basis of estimate for planning testing costs.

Neither the historical nor the expert judgment model, however, may capture other costs that facilitate and support testing activities, such as the acquisition, purchase, and maintenance of software test tools and hardware resources. This makes it imperative that, independent of the technique used for estimating testing costs, the estimate include labor costs at hourly rates, hardware costs, software costs, the number of labor hours needed, travel costs for remote participants traveling to conduct testing tasks, and the costs of an outsourcing agreement if one exists. Aggregating and decomposing all testing costs allows the test manager to create a more accurate budget for all testing activities.
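As a minimal sketch of this aggregation, the cost categories below mirror those just listed, but every figure is a hypothetical placeholder; an actual estimate would use project-specific rates, hours, and historical data:

# Hypothetical cost components for one regression test cycle.
labor_hours = {"test planning": 400, "test execution": 1200,
               "defect resolution": 600, "results documentation": 300}
hourly_rate = 95.0          # blended rate, dollars per hour (assumption)
tool_costs = 150_000        # test tool licenses and maintenance
hardware_costs = 40_000     # test lab machines, servers, printers
travel_costs = 12_000       # remote participants traveling to the test
outsourcing = 75_000        # third-party automated testing agreement

labor_cost = sum(labor_hours.values()) * hourly_rate
total = labor_cost + tool_costs + hardware_costs + travel_costs + outsourcing
print(f"Labor: ${labor_cost:,.0f}  Total: ${total:,.0f}")
# Labor: $237,500  Total: $514,500

Decomposing the estimate this way also makes it easy to see how much of the budget is sensitive to labor rates versus fixed tool and hardware costs.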


CHAPTER 5

Functional Test Automation*

Functional testing assures that your implementation of SAP meets your business requirements. Given the highly configurable and tightly integrated nature of the SAP modules, as well as the probability that you will also integrate in-house applications or third-party plug-ins, functional testing is a critical and challenging task requiring the verification of hundreds or even thousands of business processes and the rules that govern them. This chapter explores the business case for automating your functional testing, the alternative automation approaches to consider, and organizational considerations and techniques for maintaining and managing your test automation assets.

WHY AUTOMATE?

Test automation is not a panacea, but it can make a dramatic difference in the quality and stability of your SAP deployment over the long term. The key is to understand when automation works and when it does not, and how to assure your success.

Business Case for Automation

There are three key benefits to automation:

1. Expand your test coverage.
2. Save time and resources.
3. Retain knowledge.

*This chapter was authored by Linda Hayes, CTO of WorkSoft, Inc.



Expanding your test coverage is one of the most valuable benefits of automation because it translates into higher quality and thus lower costs associated with downtime, errors, and rework. Over the life of your SAP deployment you will likely experience an increase in the number of business processes it supports, either through the implementation of additional modules or through integration with other systems. As a result, each successive implementation or modification affects a greater number of business processes, which increases the risk and opportunity for failure. Even a 10 percent increase in total functionality still requires testing of 100 percent of the process inventory due to the risk of unexpected impact. The tightly integrated nature of SAP increases this risk.

As Exhibit 5.1 shows, a manual test process cannot keep pace with this expanding workload because the time and resources available for testing are either fixed or even declining. In this exhibit, the lighter arrow indicates the processes that need to be tested and the dark arrow indicates the number of test resources. This combination of an increasing number of processes that need to be tested with a static number of testers leads to increased risk and potential cost of failure.

Under the scenario represented in Exhibit 5.1, automation is the only practical answer. It enables you to capture tests as repeatable assets that can be executed for each successive release or deployment, so that the inventory of tests can keep pace with the inventory of business processes at risk. This repeatability saves time and resources as well. Instead of requiring repetitive manual effort to reverify processes each time changes are introduced, tests can be executed automatically in an unattended mode.

EXHIBIT 5.1 Test Workload Compared to Test Resources (chart: the number of business processes rises over time while test resources remain flat, widening the risk gap)


This allows your resources to focus on adding new tests to support new functionality instead of constantly repeating existing tests. When test time is short, testers will often sacrifice regression testing in favor of testing new features. The irony is that the greatest risk to the user is in the existing features, not the new ones. If a business process that the enterprise depends on stops working, or worse, starts doing the wrong thing, you could halt operations. The loss of a new feature may be inconvenient or even embarrassing, but it is unlikely to be devastating.

This benefit will be lost if the automated tests are not designed to be maintainable as the application changes. If they either have to be rewritten or require significant modifications to be reused, you will keep starting over instead of building on prior efforts. Therefore, it is essential to adopt an approach to test library design that supports maintainability over the life of the application.

Finally, the process of automating your test cases introduces discipline and formality to testing, which results in the capture of application knowledge in the form of test assets. You cannot automate what is not defined. By defining your business processes and rules as test cases, you are converting the experience of subject matter experts (SMEs) and business analysts (BAs) into an asset that can be preserved and reused over the long term, protecting you from the inevitable loss of expertise due to turnover.

When to Automate

Conventional wisdom holds that you should automate your tests only for regression testing; that is, the initial deployment should be performed manually and only successive changes automated. This belief arises from the historical record/playback approach to test automation, which requires that the software be completed and stable before scripts can be captured. New approaches exist, however, that allow automated tests to be developed well in advance of configuration or code completion. These approaches are further described in the Test Automation Approaches section later in this chapter. Using these new approaches, automated tests can serve a dual purpose: They can provide documentation of the "to be" business process as well as deliver test automation.


This collapses two steps, documentation and automation, into one, thus further conserving time and resources.

What to Automate

Automation is all about predictability. If you cannot express the precise inputs and expected outputs, you cannot automate a test. This means that automation should be used to verify what is known or predicted. Typically this means positive tests, as in assuring that the business process is executed successfully, but automation can also be applied to negative tests that verify behavior when business or field edit rules are violated, such as invalid data types or out-of-range values, in which the data is rejected and an error message is given. Think of these tests as "making sure" that processes work as expected.

In the context of SAP, the obvious automation candidates are the "to-be" processes: processes that are executed frequently, are critical to the business, and contain integration points (touch points). For SAP-based production systems, SAP transaction ST03 allows for quick filtering of which SAP transaction codes are actually used in production and to what extent/volume. Further, for each process, the data variations that exercise business and edit rules can also be automated. Applying data-driven techniques to automation enables you to quickly expand your test cases by adding data.

This also means, however, that automation is not appropriate for ad hoc, random, or destructive testing. These tests must be performed manually because by their very nature they introduce unexpected or intentionally random conditions. Think of these tests as covering "what-if" scenarios. Ad hoc tests are uniquely suited to manual testing because they require creativity and are deliberately unpredictable. By allowing automation to take care of what you expect to work, you can free your experts to try to break the system.

Critical Success Factors

Successful test automation requires:

■ Management commitment
■ Planning and training
■ Dedicated resources
■ A controlled test environment
■ A pilot project

No project can succeed without management commitment, and automation is no exception. In order for management to manage, they must know where things stand and what to expect. By letting management know up front what investment you need to succeed with automation, then keeping them informed every step of the way, you can get their commitment and keep it.

This requires a comprehensive project plan. Your automation plan must clearly identify the total costs and benefits of the project up front and provide a detailed project plan with the required resources, timelines, and related activities; you must then track results and report to management regularly. Costs include selecting and licensing the right tool, training the team, establishing a test environment, developing your test library, and maintaining both the tests and the tool. The number and type of resources you need, the time required, and the specific activities will depend on the approach you adopt. If and when obstacles are encountered, let management know right away. Get bad news out as early as possible and good news out as soon as you can back it up. Nothing is more disconcerting for management than to invest resources without seeing progress or, worse, to be hit by sudden surprises. Also keep focus on the fact that the test automation project will last as long as SAP is being used and maintained. Every successive release, update, patch, or new integration will need to be tested, and the automated test assets must accordingly be maintained and reexecuted.

No matter how easy to use the tool is claimed to be, plan for training as well, and perhaps consulting. Learning a tool through trial and error is costly and time consuming, and it is better to get off on the right foot. Since it is easier to get money allocated all at once instead of piecemeal, be careful not to buy the software first and then decide later that you need training or additional services.

Although the promise of automation is exciting, realize that test tools do not work by themselves. Buying a test tool is like buying a treadmill: the only weight you lose is in your wallet! You must use the equipment, do the exercises, and sweat it out to get the benefits. Also understand that even though test automation saves time and resources in the long run, in the short term it will require more than


manual testing. Make sure management understands this, or you may find yourself with a tool and no one to implement it.

Not only must you have the right resources, you must also commit to a controlled test environment that supports predictable data. Automation is all about repeatability, and you cannot repeat the same tests if the data keeps changing. In most cases the data values are the key to the expected results. Identifying, creating, and maintaining the proper data is not a trivial problem to address and often represents more than half of the overall effort. Do not wait until you are ready to start testing to implement your data strategy. The ideal test data strategy is to have a known data state that can be archived and refreshed for each test cycle. If this is not possible or practical, you may consider using automation to "seed" or condition the data, creating or modifying data to meet your needs.

If this is your first automation effort, start with a small pilot project to test your project plan assumptions. Invest two to four weeks and a couple of resources in automating a representative subset of your business processes, and carefully document the effort and results during the pilot, as these results can be used to estimate a larger implementation. Since you can be sure you do not know what you do not know, it is better to learn your lessons on a small scale. Also be sure to commit the right type of resources. As described in the following section on test automation approaches, depending on the approach you adopt you will need a mix of skills that may or may not be part of your existing test group. Do not imagine that having a tool means you can get by with less skill or knowledge: The truth is exactly the opposite.

Common Mistakes

Pitfalls to avoid when automating your SAP testing include:

■ Selecting the wrong tool.
■ Using record and play techniques.
■ Writing programs to test programs.
■ Ignoring standards and conventions.

In order to select the right test tool you must perform an evaluation in your environment with your team. This is the only way to


assure that the tool is compatible with your SAP implementation (including any gap applications) and, especially, that your team has the right skill set to be productive with it. A scripting tool that requires programming skills, for example, will not be successful unless you have technical resources available on your team. For purposes of this evaluation, make sure you understand how the tool handles not only test development but also test management and especially maintenance, since these are critical to long-term productivity. Do not settle for a simplistic record-and-play script. Insist on understanding how to write robust tests that are well structured, documented, reliable, and easy to maintain.

Record and play is a very attractive approach: Simply perform a manual process and have it automatically recorded into a script. But while these scripts are easy to record, they are unstable when executed and all but impossible to maintain. They do not have the documentation and structure to be readable, and they lack any logic to detect and respond to the inevitable errors and changes that will occur. Even variations in the response time of an SAP transaction can cause failures. Another drawback to recorded scripts is that they contain hard-coded data. Recording the process of creating a hundred invoices, for example, will yield a script containing the same steps 100 times over. This means that if a configuration change is made to any step of the process, it must be made 100 times. Since this is impractical, recorded scripts are rarely reused after changes and must often be re-recorded. Thus, the value of automation is lost.

While the issues with capture/playback can be resolved using advanced scripting code, this leads to the other extreme: writing programs to test programs. This technique requires programming skills, which may exclude your most experienced testers. Further, if each test case is converted to script code, you will have more code than the SAP module does. This approach results in custom code that is also difficult to maintain, especially by anyone other than the original author. Balancing the trade-offs between ease of use and coding is the subject of the discussion of test automation approaches in the next section.

The last common mistake is to ignore the need for naming standards and test case conventions. If each tester is permitted to adopt their own approach and store their tests wherever they wish, it will

05_4782

2/5/07

10:41 AM

76

Page 76

TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE

be impossible to implement a centralized, unified test library where tests can be easily located and shared. Treat your automated tests as the asset they are and ensure that they are easily found, understood, and maintained.

TEST AUTOMATION APPROACHES

Test automation has steadily evolved over the past two decades (longer if you count mainframes) from record and play, which is all code and no data, to code-free approaches that are all data and little or no code. This trend reflects the fact that code is more costly to develop and maintain than data. This evolution has followed these four stages:

1. Record and play
2. Data-driven
3. Frameworks
4. Code-free automation

These represent varying combinations of code and data used to construct test cases, and each has different advantages and drawbacks.

Record and Play

Record and play appears to be easy but turns out to be difficult. Recorded scripts usually have a very short useful life because they are hard to read, unstable when executed, and almost impossible to maintain. The time that is saved during the initial development is more than offset by the downstream costs of debugging failed sessions or re-recording after changes. Exhibit 5.2 shows an example of a recorded script.

The ideal use of record and play, oddly enough, is to capture the results of manually executed tests. This assists the tester in documenting results and perhaps reproducing the exact steps that led to any errors.


EXHIBIT 5.2 Example of Recorded Script

Traditional test automation tools that cost thousands of dollars per user are overkill for this use. Instead, look for simple session recorders, which are available for as little as $100.

Data-Driven

Data-driven techniques address the hard-coded data issue of record and play by removing the data from the scripts and instead reading it from an external file. Typically, a process is recorded once; then script code is added to substitute variables for literal data, read the variable values from a file, and loop until all records are completed. This approach reduces overall code volume and allows test cases to be added quickly as data records, but it requires programming skills to implement. It also results in custom code for each process that must be maintained as the application changes. Exhibit 5.3 reflects the type of changes introduced into a recorded script in order to make it data-driven.

EXHIBIT 5.3 Example of Data-Driven Script and Data File
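As a rough illustration of the pattern the exhibit describes (a sketch in Python rather than any particular vendor's scripting language, with data values echoing the order data in Exhibit 5.4 later in this chapter), the heart of a data-driven script is one parameterized process plus a loop over external records:

    import csv
    import io

    # Order data extracted from the recorded script into an external source.
    DATA = ("customer,product,quantity,price\n"
            "Acme Signs,Posterboard,1000,5\n"
            "Baltimore Sun,Paper,65000,1.15\n")

    def enter_order(customer, product, quantity, price):
        # In a real data-driven script these values would be typed into the
        # recorded SAP screens; a print stands in for playback here.
        print(f"VA01: {quantity} x {product} for {customer} at {price}")

    # The loop that replaces 100 copies of the same recorded steps:
    # the recorded process runs once per data record.
    for row in csv.DictReader(io.StringIO(DATA)):
        enter_order(row["customer"], row["product"],
                    int(row["quantity"]), float(row["price"]))

Adding a test case is now a matter of adding a data row, not re-recording the process.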

Frameworks

While data-driven techniques succeeded in reducing the code volume attributable to hard-coded data, they did not directly address the inefficiency of not sharing common code to handle common tasks across test cases. They also limited the analyst's ability to design test flows consisting of multiple scenarios and data. In response, frameworks evolved as a way to provide an infrastructure to handle common tasks and to allow business and quality analysts to write test flows by calling reusable code components. Typical elements of a framework include:

■ A layer that allows test flows to be constructed as data in a spreadsheet or database.
■ Reusable or generated code components that execute testing tasks against SAP.
■ An infrastructure that handles test packaging, timing synchronization, error handling, context recovery, result and error logging, and other common tasks.

Frameworks require two roles and skills: the test engineer, a programmer or scripter who develops the framework and reusable code components, and the test designer, a business or quality analyst who constructs processes by ordering these components, together with the test data values they require, into a spreadsheet or database.


A framework offers several advantages. Nontechnical testers can design automated test flows and provide data in a standard format, and test engineers can optimize their coding and maintenance effort by developing reusable components. The framework also takes care of managing and monitoring execution to provide reliable results.

There are three basic types of frameworks: key/action word, screen/window, and class library. Each type can be implemented using text files (spreadsheets) or databases. Spreadsheets are more economical, as most users already have access to them and are familiar with their use, but they are more challenging to manage and maintain because they are not centrally stored or controlled. It is also easier to make typographical or spelling errors in a spreadsheet. A database requires more cost and effort to implement but is easier to manage: by providing a graphical user interface (GUI) front end, users can select from drop-down lists and otherwise be protected from making input errors. Relational databases also enable more rapid maintenance, as mass changes can be introduced using Structured Query Language (SQL) statements and similar functions.

Key/Action Word Framework

A key or action word framework comprises business functions that perform tasks against SAP, such as entering an order or verifying an invoice. Each key or action word has a set of data values associated with it for either input or verification. Exhibit 5.4 illustrates a typical key/action word implementation using spreadsheets.

EXHIBIT 5.4 Key/Action Word Implementation Using Spreadsheets

Test Name:   Add Order
Description: Create orders and verify total and tax

Testcase     Customer         Product      Quantity   Price
Add Order    Acme Signs       Posterboard  1000       5
Add Order    Baltimore Sun    Paper        65000      1.15
Add Order    Crosstown, Inc.  Confetti     1250000    0.05

Testcase      Customer         Product      Tax    Total
Verify Order  Acme Signs       Posterboard  400    5400
Verify Order  Baltimore Sun    Paper        5980   80730
Verify Order  Crosstown, Inc.  Confetti     0      1000000

Key/action word frameworks can be developed internally or acquired from commercial vendors. Some of the commercial tools generate the scripts for common components, then allow test engineers to add additional code to handle errors and decision making at runtime as well as other application-specific logic or functionality. The maintenance of key/action word frameworks is divided between the code and the data: the code may have to be regenerated or modified when the application behavior changes, and the spreadsheet or database may have to be updated as functionality is enhanced or changed.
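A minimal sketch of the dispatch at the heart of such a framework, again in Python and using the Exhibit 5.4 rows; a production framework would add the packaging, synchronization, recovery, and logging infrastructure listed earlier:

    # Rows as they might come from the Exhibit 5.4 spreadsheet: an action
    # word followed by the data values that its component consumes.
    ROWS = [
        ("Add Order",    "Acme Signs", "Posterboard", 1000, 5),
        ("Verify Order", "Acme Signs", "Posterboard", 400, 5400),
    ]

    # Reusable components written once by a test engineer; the real
    # versions would drive and check the SAP screens.
    def add_order(customer, product, quantity, price):
        print(f"add order: {quantity} x {product} for {customer} at {price}")

    def verify_order(customer, product, tax, total):
        print(f"verify order for {customer}/{product}: tax={tax}, total={total}")

    # The dispatch table maps each action word to its component, so test
    # designers add cases by adding rows, not code.
    ACTIONS = {"Add Order": add_order, "Verify Order": verify_order}

    for action, *data in ROWS:
        ACTIONS[action](*data)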

Screen/Window

This type of framework is a variation of key/action word in that there are reusable code components that perform specific tasks, but in this case they are organized around actions, such as data input or verification, against each SAP screen. Exhibit 5.5 shows a screen/window implementation using a database and GUI interface. When a screen changes, the related screen action code components must be modified or regenerated, as must the related test case spreadsheet or database.

EXHIBIT 5.5 Screen/Window Implementation Using a Database and GUI Interface

Class Library

A class library framework is built around code components that are mapped to SAP objects instead of tasks or screens. Each object class has an associated set of actions that can be performed against it—for example, input to a text box or pressing a button. These class/action code components may be developed or generated, with code added for decision-making logic based on the results during execution. Exhibit 5.6 is an example of a spreadsheet implementation for a class library framework. As with other framework types, these tests can be organized into test processes in spreadsheets or databases; in this case, the data is provided for each single action. Since the SAP class library rarely changes, the only code that requires maintenance for functional changes is any decision-making or other custom code that has been added. The rest of the maintenance occurs in the spreadsheets or database.

EXHIBIT 5.6 Spreadsheet Implementation for a Class Library Framework
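A rough Python sketch of the class library idea, using hypothetical object classes, actions, and field identifiers; note that the test process itself is stored purely as data:

    # Hypothetical class/action components: one small class per SAP GUI
    # object class, each exposing a few actions, as the text describes.
    class TextBox:
        def __init__(self, field_id):
            self.field_id = field_id
        def input(self, value):
            print(f"set {self.field_id} = {value}")
        def verify(self, value):
            print(f"check {self.field_id} == {value}")

    class Button:
        def __init__(self, field_id):
            self.field_id = field_id
        def press(self, value=None):
            print(f"press {self.field_id}")

    CLASSES = {"TextBox": TextBox, "Button": Button}

    # A test process stored as data rows: class, field, action, value.
    STEPS = [
        ("TextBox", "VA01/customer", "input", "Acme Signs"),
        ("TextBox", "VA01/quantity", "input", "1000"),
        ("Button",  "VA01/save",     "press", None),
    ]

    for cls_name, field_id, action, value in STEPS:
        getattr(CLASSES[cls_name](field_id), action)(value)

Because the object classes rarely change, almost all maintenance lands in the STEPS data rather than in the code.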

Build versus Buy

Any of these framework types can be internally developed or licensed from a commercial vendor. While building your own framework may appear to be less costly and to provide the most flexibility and control, it requires an investment in the development and ongoing support and maintenance of the framework. Since robust frameworks consist of tens of thousands of lines of code, the resource costs and time to create and support this code may be substantial. Further, if the original framework developers leave, it is common for the replacement engineer to rewrite or restructure the framework code according to his or her own style or preferences, which adds to the ongoing cost of ownership.

Of course, buying a framework incurs a licensing fee, but this cost may be offset by reducing the continued support and maintenance costs to a fixed-price annual fee. The decision as to which option is more economical should also take into consideration how much custom code is needed in either scenario. If the commercial framework still needs significant code development to support the desired test workflows, it may not offer enough of a cost advantage to offset the license costs.

Code-Free Automation

A new type of test automation solution has recently emerged that does not require any code to be developed at all. This approach includes vendor-supported reusable code components that are mapped to the SAP class library, and it allows test analysts to construct processes using point and click within a GUI interface. The tester selects the SAP screen, the object, and the action to be performed from a series of drop-down lists, then provides the test data or variable name for any required values. The difference between the code-free approach and the previous frameworks is that no code is written or generated in order to automate the tests. All test processes are stored as data within a database. Even decision making is supported through a GUI without requiring the development of any additional code.

In this approach, the application screens and fields are defined either by learning the SAP screens or by extracting the screen information directly from the SAP metadata. This information is stored as a map within the database, and it is used to populate the drop-down lists as tests are defined. Test analysts can further select from predefined options for making decisions at runtime to control the process workflow. Exhibit 5.7 depicts an example GUI process editor for a code-free automation solution.


EXHIBIT 5.7 Example GUI Process Editor for a Code-Free Automation Solution

Aside from removing the need for test engineers to develop and maintain custom code, code-free solutions enable automated maintenance. As the application changes, the map components are compared and all differences documented, including the affected test assets. Global changes can be made automatically as well to modify or remove test steps related to changes or deletions. Since all test assets are stored as data, this can be more easily accomplished than finding impact and making changes to code. Even a code-free solution, however, should support extensibility options in the event that your implementation of SAP contains interfaces to non-SAP applications to fill gaps.
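The automated maintenance described above amounts to comparing two versions of the screen map and tracing the differences back to the tests that reference each screen. A minimal Python sketch, with hypothetical map and index structures:

    # Hypothetical screen maps extracted before and after an SAP change,
    # plus an index of which stored test assets touch each screen.
    OLD_MAP = {"VA01": {"customer", "product", "quantity"}}
    NEW_MAP = {"VA01": {"customer", "product", "quantity", "plant"}}
    TESTS_BY_SCREEN = {"VA01": ["Add Order", "Verify Order"]}

    for screen in sorted(set(OLD_MAP) | set(NEW_MAP)):
        old = OLD_MAP.get(screen, set())
        new = NEW_MAP.get(screen, set())
        added, removed = new - old, old - new
        if added or removed:
            # Document the difference and the affected test assets.
            print(f"{screen}: added {sorted(added)}, removed {sorted(removed)}")
            print("  affected tests:", ", ".join(TESTS_BY_SCREEN.get(screen, [])))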

TEST LIBRARY MAINTENANCE

The primary benefit of automating your SAP test processes is future reuse as configuration changes are made or new patches or versions are installed. By automatically reexecuting all of your test processes after changes, you can ensure that there have been no unintended effects. This level of test coverage can prevent costly production errors, failures, and downtime.

In order to enjoy this benefit you must be able to maintain your test processes as changes are made to SAP or to your configuration. If you do not update the tests each time you make a change, they will become obsolete. In the same vein, you must add new test processes or test data to verify new functionality as it is added, so that your test coverage continues to expand as your usage of SAP does.

One way to limit maintenance time and overhead is to adopt a framework or code-free approach so that script code maintenance is limited or eliminated entirely and most changes occur in data instead. Because maintenance is an ongoing requirement, it is critical that it be efficient. Extensive manual changes to custom-coded components may be too time-consuming or difficult, resulting in a reduced useful life for your automated tests. This means you must design your tests to be easily maintained by following development standards and naming conventions, and by enforcing change management and version control on all test assets.

Maintenance Events

There are three primary events that can trigger maintenance of your test assets. The first arises when your SAP configuration changes, whether to modify screens or the business process workflow. Depending on your automation approach, this will require that your test components—whether stored in code, spreadsheets, or a database—be modified to accommodate the differences.

The second maintenance event is a change to a business process due to new or different rules. The SAP screens themselves may not be modified, but the rules that govern the workflow may change. In a script code–based framework, this may necessitate scripting or regeneration of code; a code-free solution will need only changes to the test processes.

Changes to data can cause the third type of maintenance event. This change may arise from different data in the test environment itself, or new data may be needed to exercise new processes or rules. Unless you are using record and play, your test data should be located in a text file, spreadsheet, or database.

In each of these cases it is important that your naming conventions or coding standards permit you to easily identify which test assets are affected by changes without individually inspecting every test process or data file. Depending on the automation technique and framework type you select, the impact of a change may be analyzed automatically or manually. Generally, assets stored as code are more difficult to locate and change than assets stored as data. Similarly, data housed in a database is easier to manage and maintain than data stored in text files or spreadsheets.

Version Control

Because maintenance events result in changes to test assets, it is necessary to institute version control. Prior versions of a test should be kept in case the functionality has to be rolled back, or for audit trail purposes to comply with regulatory requirements.

If your tests are stored as script code, you may use a software source control system that supports check-in/check-out for code modules and allows you to identify differences and perform merges between versions. If your tests are stored as data in text files or spreadsheets, you may also use most software source control systems. For test assets stored in a database, make sure the database schema permits multiple versions to be maintained and compared, and that a test asset being modified is protected from being overwritten by someone else.
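As one illustration of the database point, here is a minimal sketch using Python's built-in sqlite3 module: every save creates a new version row rather than overwriting, so prior versions remain available for rollback or audit. The table and column names are illustrative only:

    import sqlite3

    db = sqlite3.connect(":memory:")
    # Each save inserts a new version row instead of overwriting.
    db.execute("""CREATE TABLE test_asset_version (
        asset_name TEXT NOT NULL,
        version    INTEGER NOT NULL,
        body       TEXT NOT NULL,
        saved_at   TEXT DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (asset_name, version))""")

    def save_asset(name, body):
        # Find the latest version and append the next one.
        (latest,) = db.execute(
            "SELECT COALESCE(MAX(version), 0) FROM test_asset_version"
            " WHERE asset_name = ?", (name,)).fetchone()
        db.execute(
            "INSERT INTO test_asset_version (asset_name, version, body)"
            " VALUES (?, ?, ?)", (name, latest + 1, body))
        return latest + 1

    save_asset("Add Order", "steps, version 1")
    print("now at version", save_asset("Add Order", "steps, version 2"))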

MANAGING TEST AUTOMATION

Your test automation team will require a mix of skills, depending on the approach you have selected. Estimating the time and effort will also depend on the techniques and tools you have adopted.


Team Roles

As described in previous sections, the code-based approaches and frameworks require a minimum of two roles: test engineers, who develop and maintain the script code components, and test analysts or designers, who construct and execute the test processes and data. Test designers should be subject matter experts (SMEs) or business analysts (BAs) with domain expertise in the business processes to be tested. Test engineers need programming skills and either training or experience with the scripting tool of choice. Test analysts need SAP domain expertise and business process experience. If you have adopted a database repository, you will also need database administration skills.

Whether your test framework is internally developed or commercially licensed, you will need to plan for training the test team on how to design and develop reusable, maintainable test processes. It is important not to skimp on training team members on naming standards and coding conventions. These are essential skills for implementing a test library that can be managed, maintained, and transferred over the long term.

Estimating

Estimating the timeline for your test automation effort requires you to consider the following factors: the automation approach you adopt, the number of business processes to be executed, and the number of business rules to be verified.

For example, if you select the key/action word framework approach you will need to define the inventory of key or action words that are needed, together with any custom decision-making code. Generally, if it takes one hour to record a process, it will take another five hours to modify the script to add variables, timing, logic, error handling, and so forth, plus another five to integrate it into the framework, test it, and debug it. So a one-hour manual process will take about 10 hours to reduce to a script code component. From there, additional rules can be tested by adding rows of test data, which may be rapid if the data is already defined and slower if not.

If you are developing the framework internally, add time to develop and test the infrastructure as well. A typical custom framework infrastructure for a single application is about 50,000 lines of code. Plan for time to design the library and to develop, test, and document it. At 25 to 50 tested lines of code (LOC) per developer per day, this translates into about four to eight person-years of development.

Likewise, the screen/window approach can be estimated by counting the number of SAP screens you need to traverse, then multiplying by the number of actions you intend to support for each (e.g., input, verify, and store). Finally, automate one screen of average complexity and use it as a baseline to project the remaining effort.

The class library implementation can be estimated by identifying the number of classes and related actions plus the infrastructure. There are about 12 different GUI object classes in the SAP GUI; if you provide an average of five actions for each, at approximately 50 LOC per action, you will have about 3,000 LOC for the classes and actions plus any custom code needed. After that, estimate the number of test processes and test data values needed; developing each test workflow may take from half an hour to an hour including writing, testing, and debugging. Adding test data to a workflow to exercise different rules may take only a few minutes by adding rows to a data file.

Code-free approaches require estimates for the number of business processes and rules to be verified. Processes can typically be constructed in 15 minutes to half an hour depending on complexity, and test data can be added in minutes as another row in a table. This does not include any extensions for non-SAP applications.

In all approaches, however, be certain to plan for gathering and analyzing the business process flows and the business rules and related data. Ideally, these were documented during the initial business process engineering phase in the form of the "to-be" processes. If not, plan time to interview application subject matter experts to extract this information. Exhibit 5.8 summarizes the estimating factors for each approach.

EXHIBIT 5.8 Effort Estimation Factors by Approach

Key/action word
  Framework: 50,000 LOC at 25–50 LOC/day, or licensed
  Code components: # business tasks × 200 LOC each
  Data components: # processes × test case variations × 1 minute per row

Screen/window
  Framework: 50,000 LOC at 25–50 LOC/day, or licensed
  Code components: # screens × 4 tasks × 100 LOC each
  Data components: # processes × test case variations × 1 minute per row

Class library
  Framework: 50,000 LOC at 25–50 LOC/day, or licensed
  Code components: 10 classes × 5 actions × 50 LOC each, or licensed
  Data components: # processes × number of steps × 30 seconds per step, plus # test case variations per process × 1 minute per row

Code-free
  Framework: licensed
  Code components: licensed
  Data components: # processes × number of steps × 30 seconds per step, plus # test case variations per process × 1 minute per row
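The rules of thumb in this section reduce to simple arithmetic. A small Python sketch makes the ranges explicit; the process count is a placeholder for your own inventory:

    # The chapter's rules of thumb, expressed as arithmetic.
    framework_loc = 50000            # typical custom framework infrastructure
    loc_per_day_low, loc_per_day_high = 25, 50
    work_days_per_year = 250         # assumed working days per person-year

    slow = framework_loc / loc_per_day_low / work_days_per_year
    fast = framework_loc / loc_per_day_high / work_days_per_year
    print(f"framework build: {fast:.0f} to {slow:.0f} person-years")

    # One hour of manual process becomes roughly 10 hours of scripting:
    # ~5 to harden the recording and ~5 to integrate, test, and debug.
    processes = 40                   # placeholder count of business processes
    print(f"scripting {processes} one-hour processes: ~{processes * 10} hours")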

OUTSOURCING SCRIPTING TASKS

If you adopt one of the techniques that requires test engineers—and especially if you elect to build instead of buy your framework—your organization will need skilled script coders. If you do not already have these resources available, you have three options: hire new employees, retain contractors, or outsource.

Outsourcing may offer the benefits of reduced costs and access to resources already skilled in the test tool at hand. However, realize that the test designer role requires domain expertise and cannot be outsourced. The biggest challenge of outsourcing is facilitating efficient communication and project management between the designers and engineers, especially if the engineers are offshore. Be sure to include extra time for detailed, explicit test case documentation to support remote engineering. Insist on industry best coding practices such as naming standards, coding conventions, version control, and documentation, as discussed previously in this chapter: all are essential to assure the long-term viability of your automated tests.


Finally, plan for the results to be reviewed and analyzed by the designers since they are the owners of the processes and ultimately accountable for their accuracy.

SUMMARY

Test automation is a strategic solution for assuring that your SAP implementation is accurate and reliable, both the first time it goes live and every time thereafter that configuration or software changes are made. Thorough, automated test coverage can save millions in production errors, downtime, and lost user productivity by detecting issues before they impact the business. So take the time to select the right tool and technique for your needs, invest the proper resources, and follow best practices so that your test automation library can serve as a long-term asset.


CHAPTER 6

Test Tool Review and Usage

Test tools are composed of test management tools for test planning, test design, and test execution, and of test tools for capturing manual keystrokes that can be played back with multiple sets of data. Some of the key questions to address before introducing test tools to a project are as follows:

1. Who will support and maintain the test tools and test management tools?
2. Who will provide end-user training for the test tools and test management tools?
3. Which business processes are good candidates for test automation?
4. Will test automation take place in-house or be outsourced to a third-party vendor?
5. How much documentation for test cases, test scripts, business process procedures (BPPs), and flow process diagrams exists within the project to support test automation?
6. Does the project have a dedicated SAP test environment and instance to support test automation?
7. How will the test management tools be customized?
8. When (realistically) can automation efforts be initiated given the project's deadlines, constraints, and resource bandwidth?
9. How will automated test cases be approved, signed off, and maintained?


Commercial SAP test tools bring benefits that exceed mere capture and playback to support test execution. Various vendors have produced test tools that are compatible with SAP or that supplement SAP's eCATT. The tools vary in price, scripting language, and how recorded objects are maintained. Test tools supplement the testing effort but do not replace it: activities such as test case creation, identification of test data, drafting requirements, and mapping test cases to requirements need to be completed manually before automation is attempted.

It is possible that a company may need to acquire one or more test tools from different vendors in order to support all the automation for an SAP project. Companies acquiring test tools and test management tools will need to establish a framework for deciding which processes are suitable for test automation, and also identify the necessary resources for maintaining the test tools and test management tools.

TEST TOOL REVIEW

Exhibits 6.2 through 6.8 are completed surveys from vendors that have automated test solutions compatible with SAP R/3™. Although several vendors were asked to fill out surveys for their test tools, not all vendors responded. The survey questions are based on the test tool evaluation matrix template shown in Exhibit 6.1.

In choosing a test tool, it is important to remember that test tools facilitate the testing effort but do not drive it: the methodology and approach to testing are more important than the chosen tool. It is unlikely that a single vendor would offer a test tool that meets all the automation challenges of an SAP project, and consequently the SAP project will have to apply an automation framework that includes one or more test tool vendors in addition to manual testing. SAP also provides its own native testing solution, eCATT, from its test workbench. Some commercial vendors of test tools have integrated their tools with eCATT to expand, augment, or supplement its capabilities.

EXHIBIT 6.1 Test Tool Evaluation Matrix Template

Test Tool Evaluation Matrix

Tool(s) Name:
Tool Evaluator:
Vendor Name:
Vendor Website:
Date of Evaluation:
Tool Type: (i.e., SAP record/playback scripting test tool for regression, string, integration, smoke testing) (Please fill in your own. This is an example.)

I. Transaction Capture and Playback

Automated global changes for object changes and deletion
  This refers to the ability for the tool to locate all test references to an application object and automatically update them when an object is removed or renamed. Otherwise, the user has to make all the changes manually.
Automated impact analysis for application changes
  This refers to the ability for the tool to inform the user of all tests that are affected by a change to an application object. Otherwise, the user must search all tests and manually locate all references.
Tests can be developed concurrently with software development
  This refers to the ability to create an automated test without having the application available to record against. This is a necessary feature for agile development approaches that require tests to be developed before the code.
No script coding required
  Tool allows automation without the need to write, generate, or maintain programming code.
Tool supports recording of non-SAP applications
Common scripting language (i.e., VB)
  Which is the underlying programming language for scripting and recording (i.e., Visual Basic)?
Allows RFCs to be called
  Can recording tool invoke remote function calls?
Produces automatic optional steps
  Does test tool offer the user the ability to automatically convert a captured test step into an optional step without having to add if/else scripting logic? (i.e., in SAP a given screen may or may not appear based on entered data.)
Has analog and object recording capabilities
  Analog recording refers to recording that requires knowledge of an object's coordinates (on the x/y axis) within the monitor screen (i.e., Citrix, DOS, bitmaps, etc.). Analog recording is useful when you need to track every movement of the mouse as you drag the mouse around a screen or window. Recognizing objects independent of location is digital recording. Object recording enables you to record on any object in your application, whether or not the test tool recognizes the specific object or the specific operation.
Has repository for managing the properties of recorded objects
  Has a repository where attributes and properties of captured or recorded objects are stored and maintained.
Think times can be added without programming or code changes
  After the script is recorded the tester can add think times and delays to each test step without having to insert/change existing programming code.
Test tool allows for creation of user-defined functions
  Tools allow creation of a user-defined function in order to make tests or components easier to maintain. User-defined functions can be accessed from any test or component.
Test tool offers keyword-driven tests
  Keyword-driven testing enables you to create and view the steps of your test or component in a modular, table format. Each step in the script is a row in the Keyword View, comprised of individual parts that one can modify.
Has interactive captured screen of captured/recorded process
  The tool shows and displays the screen that was captured during the initial recording.
If tool offers captured/recorded screen, user can modify script logic through the captured screen
  Example: If an SAP screen has 10 fields when it was recorded and then a new field is added, the tester can modify the script to include the 11th field through the captured screen without having to modify the script's programming language.
Allows renaming of labels for captured fields
  The labels that the tool captures for the recorded SAP fields can be modified.
Allows adding of start and stop watches
Vendor offers library of pre-recorded SAP scripts/processes with tool
  The vendor offers a library of generically or plain-vanilla recorded scripts containing SAP t-codes (i.e., MIGO, CJ20N, etc.).

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.)
  List the actual SAP bolt-ons that your tool supports.
Supports SAP GUI, Citrix, and Portals
  List the actual SAP versions and SAP front end that your tool supports.

III. Tool Maintenance

Allows toolbar customizations
  The test tool allows the end user to display/suppress/add fields and buttons, relocate buttons, etc.

IV. Tool Installation

Tool installation is Web-based or requires desktop GUI installation (fat or thin client installation)?
Vendor offers floating licenses

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle
  This refers to the fact that tests are stored as data and not as script code.
Integrates with Solution Manager
  Does the recording tool or test management tool integrate with SAP's Solution Manager and, if so, which one?
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager
Integrates with test management tool
  Recorded scripts can be stored within a test management tool (i.e., TestDirector, QACenter). If so, which test management tool?
If test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control?
  Does test management tool allow one to keep track of multiple versions of the same script, have check-in/out features, offer date/time stamp, script status, etc.?
Integrates with eCATT
  The recording tool has API integration to SAP's eCATT, or the scripting tool can be enhanced with eCATT. Can test data be shared across from eCATT to the recording tool?
Integrates with test tools other than eCATT
Open API to integrate with other tools, languages
  This refers to the ability of the tool to integrate with any other technology needed to execute an end-to-end test.

VI. Tool Execution

Decision-making options for each test step on pass or fail
  This refers to the tool's ability to let users make decisions based on the result of each step at runtime and control the test workflow as a result, without writing any code.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring
  This means the tool allows the user complete control over the execution process as well as visibility into the value of data at any point, without having to view or interact with programming code.
Runs scripts in background and foreground mode
Has scheduling capabilities
  Does the test tool offer scheduling capabilities? Can recorded scripts be executed from a scheduler (i.e., run script X every Friday at 10:00 A.M.)?
Capabilities to run unattended and skip failed iterations
  If a script has multiple iterations and one of them fails, the script can be instructed to skip the failed iteration and proceed to the next one.
Scheduling tool offers execution with dependencies
  Can scripts from test tool be executed from the scheduler in a given sequence (i.e., run script Y before scripts X and Z every Friday at 11:00 A.M., but if script Y fails do not execute script Z)?
Contains debugger
  The tool has capabilities to set up "watch variables," has a compiler, allows tracing.
Allows for automatic synchronization between client and server
  Tool offers automatic timing synchronization management. This means that the script playback time automatically adjusts to the SAP response times so that the script does not get ahead of SAP during execution (playback) in the event that the SAP server is experiencing delays/lags.
Built-in error handling capability
  The tool framework has an automated way of responding to application or test errors without requiring programming code.
Built-in context recovery capability
  The tool framework has an automated way of repositioning the application to a known state after a previous test failure so the test session can continue with the next test.
Automatic timing for each step, process, and suite
  The tool automatically records timing intervals at every level of detail without requiring the use of stopwatches or other coding techniques.

VII. Tool Data

User-defined data filtering on all views
  The tool allows users to sort and select based on the value of any test element to quickly locate and view desired tests from a large inventory.
All test assets stored as data in relational database
  Test data is not stored in spreadsheets or other desktop tools, but instead is stored in a central database that is easily shared and controlled.
Database verification and data acquisition
  Tests can either retrieve data from a database during execution or verify the value in a database without writing any code.
Provides Excel-based functions (i.e., TRIM, MID, etc.) to clean up captured text
  Captured text can be formatted and manipulated within the test tool's data sheets (i.e., the status bar message "Sales Order 001" can be formatted to extract only the value "001" through Excel-based formulas). The recording tool contains data sheets for manipulating or capturing script data.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.)
  The tool can work with data residing in external data files (i.e., tab delimited, .txt, etc.).
Allows for verification points (objects, database values, text)
  Can objects and text be verified during script playback? (For example, check that an enter button is disabled; check that quantity on hand is at least 10 before carrying out the sales order.)
Tool offers regular expressions (i.e., text character matching)
  Matches characters from captured or evaluated text based on a pattern (i.e., verify all SAP sales order numbers from the status bar message starting with a "5" such as Sales Order Number 5*).
Capabilities for creating external data files
  Data captured or generated during the script playback (execution) can be sent to an external data file.
Allows data seeding and data correlation
  This means that data can be passed from one recorded SAP t-code to the next within the same script (i.e., a single script is recorded to pass data from Sales Order t-code (VA01) to Delivery t-code (VL01)). This is the stringing together of SAP transactions within the same script or automated test case.
Allows variable declaration
  Variables of type INT, FLOAT, CHAR, etc. can be declared.
Captures screen text (i.e., status bar messages)
  Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text for a field, text from a drop-down list, etc.).
Provides playback with multiple data access methods (i.e., random)
  Data access method can be sequential, random, etc.

VIII. Tool Security

User and group security and permissions for each test asset component
  The test repository assets can be managed according to user and group security and permissions to control access based on user roles.
Allows SAP roles-based testing

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool
  Is there a vendor website from which patches can be downloaded?
SAP Corporation has formally certified the tool

X. Training

Vendor offers test tools training
  Does the vendor offer an official training program with instructors, books, class exercises?
Vendor offers certification examination in test tool
  Is there an examination that testers can take to demonstrate proficiency and skill level in the test tool?

XI. Test Reporting and Review

Results logs store screen captures
  Shows screen captured during initial script recording.
Results log shows status for each row of data (iteration)
  Shows the status for each iteration (i.e., out of 10 iterations 9 passed).
Results log includes date and time stamp
Results log can be saved in different formats (i.e., HTML, .doc)
Creates automatic test results files (test logs)
  Results logs show both actual and expected results. Results logs show whether a verification point passed.
User-defined query and reporting or charting capability
  Users can easily develop their own database queries and present the results in charts or reports.
English-narrative documentation produced automatically from test processes
  As the test is developed, English-like documentation is automatically produced and maintained. There is no need for a separate documentation step.
Export to text capability for all test assets
  All test assets can be exported from the database into a text file or other format for integration with other tools.
User-extensible classes, actions, and functions
  The user can extend the capability of the tool to automate custom and third-party controls.
User can extend interface with unlimited new attribute fields
  Refers to the test management capability and to the ability to add data elements as text, combo-box, checkbox, etc. This allows the user to customize the information that is captured and reported so that it conforms to the user's internal processes and terminology.
Allows many-to-one and one-to-many requirements traceability
  Single or multiple requirements can be traced to a single or many tests.
Supports full indirection for all test processes and data file names
  It means you can pass the names of these components as variables to allow the test data to control the flow and content of test execution.
Language and platform independent
  Is the tool language and platform independent?

EXHIBIT 6.2 Test Tool Evaluation Matrix (Vendor: Arsin Corporation)

Tool Evaluator: Bob Koche
Vendor Name: Arsin Corporation
Vendor Website: www.arsin.com
Date of Evaluation: 02/09/06
Tool Offerings: SAP Test Automation Library, Validation Engine and Automated Test Management & Creation Toolset—works in conjunction with test automation tools from Mercury and SAP

I. Transaction Capture and Playback

Automated global changes for object changes and deletions
  Yes—test assets are object-based.
Tests can be developed concurrently with software development
  No.
No script coding required
  Yes, allows complex automatic validation without any coding. Removes the need for coding in GUI test tool.
Tool supports recording of non-SAP applications
  N/A. Not a recording test tool.
Common scripting language (i.e., VB)
  Visual Basic scripting and ABAP development language for Validation Engine.
Allows RFCs to be called
  Yes.
Produces automatic optional steps
  Yes, through the Atlas Library Wrapper.
Has analog and object recording capabilities
  N/A.
Has repository for managing the properties of recorded objects
  Yes, through the front end GUI test tool like Quick Test Pro.*
Think times can be added without changing programming code
  Yes, through the front end GUI test tool like Quick Test Pro.*
Test tool allows for creation of user-defined functions
  Yes, Validation Engine allows for creation of user-defined functions.
Test tool offers keyword-driven tests
  No.
Has interactive captured screen of captured/recorded process
  Yes, through the front end GUI test tool Quick Test Pro.*
If tool offers captured/recorded screen, user can modify script logic through the captured screen
  Yes.
Allows renaming of labels for captured fields
  Yes.
Allows adding of start and stop watches
  No.
Vendor offers library of prerecorded SAP scripts/processes with tool
  Yes—Extensive Component Library of Transactions and Processes.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.)
  Yes, all SAP ERP and SAP New Dimension Products such as APO, CRM, BW, etc.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals)
  No answer provided.

III. Tool Maintenance

Allows toolbar customizations
  No.

IV. Tool Installation

Tool is web-based or requires desktop GUI installation (fat or thin client installation)?
  Web-based application.
Vendor offers floating licenses
  Yes.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle
  Yes—any DBMS.
Integrates with Solution Manager
  Not at present time. Planned for future release.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager
  N/A.
Integrates with test management tool
  Yes. Integrates with Mercury Interactive's TestDirector** (Quality Center).
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control?
  Yes, depending on test management tool that Atlas integrates with.
Integrates with eCATT
  Yes.
Integrates with test tools other than eCATT
  Yes, with Mercury's Quick Test Pro* and Business Process Testing.***
Open API to integrate with other tools, languages
  Yes.

VI. Tool Execution

Decision-making options for each test step on pass or fail
  Yes.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring
  Not under the current release.
Automated impact analysis for application changes
  Not under the current release.
Runs scripts in background and foreground mode
  No.
Has scheduling capabilities
  Yes.
Capabilities to run unattended and skip failed iterations
  Yes, can run unattended and skip failed iterations.
Scheduling tool offers execution with dependencies
  The current version supports simple group scheduling. A future version will support conditional scheduling.
Contains debugger
  N/A.
Allows for automatic synchronization between client and server
  Yes, through the front end GUI test tool Quick Test Pro.*
Built-in error handling capability
  Yes.
Built-in context recovery capability
  Not under current release.
Automatic timing for each step, process, and suite
  Timing is tracked for each step.

VII. Tool Data

User-defined data filtering on all views
  No.
All test assets stored as data in relational database
  Yes—all objects created using Atlas™ are stored in DBMS.
Database verification and data acquisition
  Yes.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text
  Not Excel-based. A complete library of functions is available for validation purposes.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.)
  Yes. Pulls data from external files, databases, and/or spreadsheets.
Allows for verification points (objects, database values, text)
  Yes. The Validation Engine provides great flexibility and coverage of all standard and custom SAP objects.
Tool offers regular expressions (i.e., text character matching)
  Yes, this is a feature of the Effecta™ Validation Engine.
Capabilities for creating external data files
  Yes.
Allows data seeding and data correlation
  Yes.
Allows variable declaration
  Yes.
Captures screen text (i.e., status bar messages)
  Yes, through front end GUI test tools like Quick Test Pro.*
Provides playback with multiple data access methods (i.e., random)
  No.

VIII. Tool Security

User and group security and permissions for each test asset component
  Yes, for validation objects.
Allows SAP roles-based testing
  Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool
  Yes, patches available via the Web.
SAP Corporation has formally certified the tool
  Not at this moment. This is in process.

X. Training

Vendor offers test tools training
  Yes.
Vendor offers certification examination in test tool
  Yes.

XI. Test Reporting and Review

Results logs store screen captures
  Yes, through front end GUI test tools like Mercury Interactive's Quick Test Pro.*
Results log shows status for each row of data (iteration)
  Yes, results show the status of each validation step in test script.
Results log includes date and time stamp
  Yes.
Results log can be saved in different formats (i.e., HTML, .doc)
  Yes, as a MS Word file (.doc).
Creates automatic test results files (test logs)
  Yes—reports on test results are provided. Results can be fed into Mercury Interactive's test management application (TestDirector** or Quality Center).
User-defined query and reporting or charting capability
  User-defined querying is not available under the current release. However, canned reporting is available.
English-narrative documentation produced automatically from test processes
  Not under the current release.
Export to text capability for all test assets
  Yes.
User-extensible classes, actions, and functions
  Yes.
User can extend interface with unlimited new attribute fields
  No.
Allows many-to-one and one-to-many requirements traceability
  Yes.
Supports full indirection for all test processes and data file names
  No.
Language and platform independent
  Not language independent. Yes, it is platform independent.

* Assumes Quick Test Pro has been acquired.
** Assumes TestDirector (Quality Center) has been acquired.
*** Assumes BPT has been acquired.
Reprinted with permission from Arsin Corporation.

107

Yes. Auto Command facility allows user to create scripts without coding. Yes. Also tests GUI, Host/Legacy, and Web applications. The underlying programming language is proprietary English-like language designed for nonprogrammers or business users. AutoTester can make calls to user-defined DLLs. Logical operators for screen identification can be added at will. There is no compilation of the script. A captured testing step can be edited to reflect the possibility that it may not be present and steps added to compensate for this.

Tests can be developed concurrently with software development

No script coding required

Tool supports recording of non-SAP applications

Common scripting language (i.e., VB)

Allows RFCs to be called

Produces automatic optional steps

(Continues)

User-defined through variables. If a global change occurs, only the variable needs to be modified.

Automated global changes for object changes and deletions

I. Transaction Capture and Playback

Comments/Responses

10:43 AM

Criteria

2/5/07

Tool(s) Name: AutoTester ONE Special Edition for SAP Tool Evaluator: Mundy Peale and Michael Vils Vendor Name: Autotester, Inc. Vendor Website: www.autotester.com Date of Evaluation: 02/02/2006 Tool Offerings: SAP record/playback scripting test tool for regression and integration testing

Test Tool Evaluation Matrix

EXHIBIT 6.3 Test Tool Evaluation Matrix (Vendor: Autotester, Inc.)

06_4782 Page 107

108 Comments/Responses AutoTester One supports both analog and object recording. Doesn’t have a repository—attributes and properties of captured or recorded objects are stored in the script itself. After the script is recorded the tester can add think times and delays to each test step without having to insert/change existing programming code. Tools allow creation of user-defined functions (sub routines or called modules) in order to make tests or components easier to maintain. User-defined functions, can be accessed from any test. Test Scripts can be viewed in icon-based views with simplified descriptive text of testing steps or viewed with the full underlying code. Scripts in the icon-based view can group testing steps as well. All steps can be edited at will by the user. Full screens are not captured. Individual component and user interactions are. If an SAP screen had 10 fields when it was recorded and then a new field is added, the tester can modify the script to include the 11th field through simple editing within the script. Yes. Time stamping and transaction timings can be added. N/A.

Has analog and object recording capabilities

Has repository for managing the properties of recorded objects

Think times can be added without changing programming code

Test tool allows for creation of user-defined functions

Test tool offer keyword driven tests

Has interactive captured screen of captured/recorded process

If tool offers captured/recorded screen, user can modify script logic through the captured screen

Allows renaming of labels for captured fields

Allows adding of start and stop watches

Vendor offers library of prerecorded SAP scripts/processes with tool

2/5/07

Criteria

EXHIBIT 6.3 (Continued)

06_4782 10:43 AM Page 108

Recorded scripts can be stored within a test management tool called Test Organizer, which is integrated with AutoTester One.

Integrates with test management tool

(Continues)

N/A.

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager

N/A.

V. Tool Integration Stores test assets in MSDE, SQL Server, or Oracle

Test management tool (Test Organizer) does not integrate with SAP’s Solution Manager.

Yes.

Vendor offers floating licenses

Integrates with Solution Manager

Resides as a GUI tool on the Windows desktop—fat client installation.

IV. Tool Installation Tool is web-based or requires desktop GUI installation (fat or thin client installation)?

Allows toolbar customizations

10:43 AM

The test tool does not allow end user to display/suppress/add fields, buttons, relocate buttons, etc.

SAP® R/3® client software Version 4.0B, 4.5a, 4.5b, 4.6b, 4.6c, 4.6d, 6.10, or 6.20.

Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals)

2/5/07

III. Tool Maintenance

Compatible with any GUI or web-based application. Does not have to be SAP based.

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.)

II. SAP Supported Versions, Applications

06_4782 Page 109

109

110 Test Organizer does provide version-control capabilities. Test Organizer has as a check-in/out feature, script status, and management coverage reporting of testing results. AutoTester One does not integrate with eCATT—integrates with the Scripting Facility. N/A. Yes—Proprietary.

If integration with test management tool exists, does it offer version-control capabilities? Or integrate with thirdparty tool for version control?

Integrates with eCATT

Integrates with test tools other than eCATT

Open API to integrate with other tools, languages

User defined. Yes. N/A. If a script has multiple iterations and one of them fails, the script can be instructed to skip the failed iteration and proceed to the next one. Foreground only. AutoTester One has a scheduling module. Scripts can be set to run at specific times or in a countdown mode. Scripts can be executed from the scheduler in a given sequence. Dependencies are scripted. Not a component of AutoTester One.

Decision-making options for each test step on pass or fail

Execution control allows single-step, breakpoints, screen captures, variable and data monitoring

Automated impact analysis for application changes

Capabilities to run unattended and skip failed iterations

Runs scripts in background and foreground mode

Has scheduling capabilities

Scheduling tool offers execution with dependencies

Contains debugger

10:43 AM

VI. Tool Execution

Comments/Responses

2/5/07

Criteria

EXHIBIT 6.3 (Continued)

06_4782 Page 110

User defined and scripted—not automatic. User defined and scripted—not automatic. Relative execution times are notated automatically in results file.

Built-in error handling capability

Built-in context recovery capability

Automatic timing for each step, process, and suite

Test assets can be stored in the Test Organizer module of AutoTester One, which uses a relational database. N/A. Captured text can be formatted, and manipulated within the variable that stores the data (proprietary). Data read from external files can also be formatted. The tool can work with data residing in external data files (i.e., comma delimited, .txt file, and Excel spreadsheets, etc.). Objects and text can be verified during script playback (i.e., check that an enter button is disabled; check that quantity on hand is at least 10 before carrying out the sales order etc.). Matches characters from captured or evaluated text based on a pattern (i.e., verify all SAP sales order numbers from the status bar message starting with a “5” such as Sales Order Number 5*).

All test assets stored as data in relational database

Database verification and data acquisition

Provides, Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text

Data driven tests (i.e., pulls data from spreadsheets, external sources, etc.)

Allows for verification points (objects, database values, text)

Tool offers regular expressions (i.e., text character matching)

(Continues)

Test Organizer provides filtering capabilities on all reporting views.

10:43 AM

User-defined data filtering on all views

2/5/07

VII. Tool Data

Script playback time automatically adjusts to the SAP response times so that the scripts do not get ahead of SAP during execution (playback) in the event that the SAP server is experiencing delays/lags.

Allows for automatic synchronization between client and server

06_4782 Page 111

111

Capabilities for creating external data files: Not offered within tool. Excel data file format and text files are supported.
Allows data seeding and data correlation: Data can be passed from one recorded SAP t-code to the next within the same script (i.e., a single script is recorded to pass data from the Sales Order t-code (VA01) to the Delivery t-code (VL01)). Multiple SAP transactions can be included within the same script.
Allows variable declaration: Variables can be text, numeric, object, bitmap, macro, or date and do not have to be declared.
Captures screen text (i.e., status bar messages): Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text from a field, text from a drop-down list, etc.).
Provides playback with multiple data access methods (i.e., random): Data access method can be sequential, random, etc.
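The regular-expressions row above cites verifying that SAP sales order numbers captured from the status bar start with a “5.” As a hypothetical illustration only (the tools in these exhibits use their own proprietary or VBA-based syntax, not Python), the same check might look like this:

    import re

    # Hypothetical status bar text captured after saving a sales order.
    status_text = "Standard Order 5001234 has been saved"

    # Sales order numbers in this example start with "5"; the pattern
    # matches a "5" followed by additional digits.
    match = re.search(r"\b5\d+\b", status_text)

    if match:
        print("Verification passed: found sales order", match.group(0))
    else:
        raise AssertionError("No sales order number starting with 5 was found")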

VIII. Tool Security

User and group security and permissions for each test asset component: The Test Organizer has permission levels for access to components stored within the project.
Allows SAP roles-based testing: Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Website exists from which patches and product updates can be downloaded.

X. Training

Vendor offers test tools training: Online and on-site training are offered.
Vendor offers certification examination in test tool: There is a basic training class certificate available after completion of training.
SAP Corporation has formally certified the tool: Certified using the GUILIB testing interface with SAP versions up to 4.5b. Scripting Facility support does not have a formal certification from SAP. SAP 4.7 testing certification is only available for eCATT integration.

XI. Test Reporting and Review

Creates automatic test results files (test logs): Result logs show both actual and expected results. Results logs show whether a verification point passed.
Results logs store screen captures: Shows screen/data captured during script recording (benchmark) and screen/data captured during playback.
Results logs show status for each row of data (iteration): Shows the status for each iteration (i.e., out of 10 iterations, 9 passed). All iterations are user defined.
Results logs include date and time stamp: Date and time stamps are included.
Results logs can be saved in different formats (i.e., HTML, .doc): Results can be exported to text formats.
User-defined query and reporting or charting capability: Test Organizer provides several reporting capabilities.
English-narrative documentation produced automatically from test processes: Some descriptive text is included in the script during the recording process. Specific additional text can be added at will to create a documented test case within the script. The text-based script becomes the document.
Export to text capability for all test assets: Scripts are in text format.
User-extensible classes, actions, and functions: N/A.
User can extend interface with unlimited new attribute fields: N/A.
Allows many-to-one and one-to-many requirements traceability: Requirements traceability is available in the Test Organizer module of AutoTester One.
Supports full indirection for all test processes and data file names: Variable names and file names have indirection capability.
Language and platform independent: AutoTester One works with SAP interfaces in other languages. Works only on Windows-based platforms.

Reprinted with permission from Autotester, Inc.

EXHIBIT 6.4 Test Tool Evaluation Matrix (Vendor: Compuware Corporation)

Tool(s) Name: TestPartner (part of the QACenter TestSuite)
Tool Evaluator: Brian Hurst
Vendor Name: Compuware Corporation
Vendor Website: www.compuware.com
Date of Evaluation: 02/02/06
Tool Offerings: Functional testing for SAP

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: Object Mapping is 100% centralized.
Tests can be developed concurrently with software development: No.
No script coding required: N/A. Scripting is done within Microsoft VBA.
Common scripting language (i.e., VB): Microsoft VBA 6.2 (editor and language).
Tool supports recording of non-SAP applications: SAP is one environment supported. Others include Microsoft, Java, Web, and Oracle ERP. No purchase of “plug-ins” is required. Object-level recording for multiple technologies is standard.
Allows RFCs to be called: Can call BAPIs via ActiveX interface.
Produces automatic optional steps: No.
Has analog and object recording capabilities: Yes; analog (positional mouse clicks only) can be captured when objects are not available. Tool can also perform BitMapSelects, meaning captured bitmaps can be stored and automated like objects.

Has repository for managing the properties of recorded objects: TestPartner stores objects in the Object Map, also located in the Repository.
Think times can be added without changing programming code: Think times can be captured during recording.
Allows adding of start and stop watches: Yes, clock functions are available.
Test tool allows for creation of user-defined functions: Functions can be created and made available to all scripts in all projects. Function capabilities follow Microsoft VBA.
Test tool offers keyword-driven tests: Keyword testing is provided through a strategic integration partner, ScriptTech.
Has interactive captured screen of captured/recorded process: No. This is targeted functionality for a future release of TestPartner.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: No. This is targeted functionality for a future release of TestPartner.
Allows renaming of labels for captured fields: Yes, via Object Mapping.
Vendor offers library of prerecorded SAP scripts/processes with tool: No. Compuware services can be employed to perform/assist with rapid script creation.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): BW, APO, SRM.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): SAP WinGUI 6.20 and 6.40; SAP Portal is tolerated. Apps deployed on Citrix can be tested at the Citrix server layer. Load testing can be performed on native SAP or Citrix protocols.

III. Tool Maintenance

Allows toolbar customizations: Toolbars can be configured and/or moved. External tools can be added to menus.

IV. Tool Installation

Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? Desktop install (fat client).
Vendor offers floating licenses: Yes; all license sales are concurrent in nature.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: Yes, all assets are stored in Access, SQL Server, or Oracle.
Integrates with Solution Manager: Test requirements can be automatically built from the process model in Solution Manager.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: eCATT can launch and store TestPartner scripts via SAP-certified integration.
Integrates with test management tool: TestPartner stores all test scripts in a database by default. Test management (QACenter) controls the execution and reporting (i.e., scripts are not physically moved for test management purposes).
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control? Test assets can be exported to files for import into a version-control package.
Integrates with eCATT: Certified eCATT integration by SAP.
Integrates with test tools other than eCATT: No. But Compuware’s QACenter for test management integrates with multiple requirements management tools.

Open API to integrate with other tools, languages: References can be added to external applications/libraries via the VBA “Add Reference” functionality. QACenter Test Management integrates with the third-party requirements tools CaliberRM, RequisitePro, DOORS, and Compuware SteelTrace.

VI. Tool Execution

Decision-making options for each test step on pass or fail: No.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: The scripting environment is Microsoft VBA and includes all debugging capabilities of Visual Basic.
Automated impact analysis for application changes: No. However, SAP transports can be analyzed to detect t-codes that are affected by a system change.
Capabilities to run unattended and skip failed iterations: Remaining tests can be executed if a failure occurred, or an abort-all-tests-on-first-failure option may be used.
Runs scripts in background and foreground mode: Foreground only for functional tests; background for load tests.
Has scheduling capabilities: QACenter Portal includes scheduling of execution (time/date) and repetitive execution (i.e., daily, weekly, monthly).
Scheduling tool offers execution with dependencies: This can be accomplished via scripting.
Contains debugger: TestPartner uses the Microsoft VBA scripting environment and language.
Allows for automatic synchronization between client and server: Synchronization is handled automatically, and timeout values can be set globally.
Built-in error handling capability: Yes.
Built-in context recovery capability: Not built in; however, logic can be built and embedded through scripting.
Automatic timing for each step, process, and suite: Timings are captured at the suite and script level.
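Automatic client/server synchronization with a global timeout, as described in the row above, generally reduces to polling the application until it reaches an expected state. The sketch below is a generic illustration in Python, not TestPartner’s actual mechanism; the commented screen query is a hypothetical placeholder:

    import time

    GLOBAL_TIMEOUT = 60  # seconds, analogous to a globally set timeout value

    def wait_until(condition, timeout=GLOBAL_TIMEOUT, interval=0.5):
        """Poll until condition() returns True or the timeout expires,
        so playback never runs ahead of a lagging SAP server."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(interval)
        raise TimeoutError("Screen did not reach the expected state in time")

    # Hypothetical usage with a placeholder screen query:
    # wait_until(lambda: screen.status_bar().startswith("Document"))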

VII. Tool Data

User-defined data filtering on all views: Yes.
All test assets stored as data in relational database: Yes, all assets are stored in Access, SQL Server, or Oracle.
Database verification and data acquisition: Yes, via Compuware’s FileAid.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Not Excel-based. However, all VBA string commands are available.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): The ActiveData wizard provides TestPartner capabilities to create variables within scripts that are compatible with these source formats: .xls, .txt, or .csv.
Allows for verification points (objects, database values, text): Standard checkpoints include Text, Bitmap, Property, Content, and Clock. Custom checks (UserChecks) can be scripted.
Tool offers regular expressions (i.e., text character matching): In text comparison, options such as Any Valid Number or Number within Range can be used, and character-matching patterns (any alpha, numeric) can be built.
Capabilities for creating external data files: Yes. The ActiveData wizard allows the parameterization of recorded scripts, which creates the Excel file with row 1 of data equal to the recorded data. Furthermore, data can be written to the data file at runtime.
Allows data seeding and data correlation: The optimized approach for sharing data between scripts is to read/write Microsoft Excel columns. Other options exist.
Allows variable declaration: Yes. Variable types and declaration rules are provided by Microsoft VBA.
Captures screen text (i.e., status bar messages): Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text for a field, text from a drop-down list, etc.).
Provides playback with multiple data access methods (i.e., random): Data can be read in sequential or random order. Furthermore, specific data rows (e.g., only rows 5–12 out of 20 rows of data) can be specifically utilized.
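The row-per-iteration, data-driven pattern described above (an external file in which each row drives one pass through a transaction) can be pictured with a short, generic Python sketch. The file name, column names, and the run_sales_order stand-in are assumptions for illustration, not part of any vendor’s product; the loop also mirrors the unattended, skip-failed-iterations criterion:

    import csv

    def run_sales_order(row):
        """Hypothetical stand-in for a recorded transaction that is
        parameterized with one row of test data."""
        if not row["material"]:
            raise ValueError("missing material")

    results = []
    with open("orders.csv", newline="") as f:   # columns: material, quantity
        for i, row in enumerate(csv.DictReader(f), start=1):
            try:
                run_sales_order(row)
                results.append((i, "PASS"))
            except Exception as exc:            # skip the failed row, keep going
                results.append((i, "FAIL: %s" % exc))

    for iteration, status in results:
        print("row %d: %s" % (iteration, status))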

VIII. Tool Security

User and group security and permissions for each test asset component: Yes. Role-based security at the project level.
Allows SAP roles-based testing: Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Yes, through Compuware Frontline.

X. Training

Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Certification standards are under development.
SAP Corporation has formally certified the tool: Yes.

XI. Test Reporting and Review

Results logs store screen captures: No screen captures in the current release. Targeted future functionality.

Results logs show status for each row of data (iteration): All iterations are logged; results are not grouped by data row iteration.
Results logs include date and time stamp: Test start and test end are included in the Web result summary.
Results logs can be saved in different formats (i.e., HTML, .doc): Results can be exported to these formats: HTML, TXT, CSV, XML.
Creates automatic test results files (test logs): Logs show both actual and expected results. Results logs show whether a verification point passed.
User-defined query and reporting or charting capability: Over 35 customizable reports are standard in QACenter. Data can be exported to XML and text for third-party reporting.
English-narrative documentation produced automatically from test processes: No.
Export to text capability for all test assets: Tests (and related assets) are exportable to XML.
User-extensible classes, actions, and functions: Yes. Shared and class modules.
User can extend interface with unlimited new attribute fields: N/A.
Allows many-to-one and one-to-many requirements traceability: Yes; QACenter works on a requirements-driven test strategy.
Supports full indirection for all test processes and data file names: Yes.
Language and platform independent: TestPartner is used globally and has been localized for double-byte characters and Unicode.

Reprinted with permission from Compuware Corporation.

EXHIBIT 6.5 Test Tool Evaluation Matrix (Vendor: Sucid Corporation)

Tool(s) Name: Sucid Process Modeler, Sucid Function, Sucid Load, Sucid Security, Sucid Reports
Tool Evaluator: Giles Samoun
Vendor Name: Sucid Corporation
Vendor Website: www.sucid.com
Date of Evaluation: 02/01/06
Tool Offerings: SAP test automation tool for functional testing (unit, integration, regression), load testing (load, stress, performance), and security testing (positive, negative)

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: No.
Tests can be developed concurrently with software development: Allows test automation to take place concurrently with development. If any transaction must be recaptured due to major changes late in the development cycle, the transaction can simply be recaptured without scripting, and all the transactions around it in the modeled business process automatically update their links and dependencies to/with the replaced transaction, without the need for scripting.
No script coding required: The user runs transactions using the SAP GUI as they normally would, and the transactions are automatically captured and automated without requiring any scripting.
Tool supports recording of non-SAP applications: Sucid products support only SAP test automation. The product line has been built and architected from the ground up specifically for SAP.

Common scripting language (i.e., VB): N/A. The product is not a script-driven tool, so this is not applicable.
Allows RFCs to be called: Supports capture, modeling, and test execution of machine-generated transactions, such as those submitted through RFC or BAPI interfaces, with the same functionality as that offered for testing user-generated transactions.
Produces automatic optional steps: The product does not currently support this functionality, but it is planned for a future release.
Has analog and object recording capabilities: Analog recording capabilities are not available. Object recording capabilities are provided without the need to write or maintain any scripts.
Has repository for managing the properties of recorded objects: Stores the automated test cases’ associated properties within a database.
Think times can be added without changing programming code: Think times can be changed without any scripting/coding.
Test tool allows for creation of user-defined functions: N/A. This question is not applicable to Sucid products, which are not scripting-based in the first place.
Test tool offers keyword-driven tests: No. A tabular interface for making variables out of SAP fields is used for automated test cases, which is at least similar to this.
Has interactive captured screen of captured/recorded process: Displays each screen captured as the transaction is automated.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: Supports this capability, but again does so without scripting, as scripting is not required.
Allows renaming of labels for captured fields: The product treats captured SAP data fields by their native SAP names.
Allows adding of start- and stopwatches: Enables the user to check screen response times as a functional or load test condition without requiring any scripting.
Vendor offers library of prerecorded SAP scripts/processes with tool: Does not include a library of such scripts.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Beyond the traditional SAP ERP core (including all its modules, such as FI, SD, MM, etc.), the product is compatible with other SAP modules such as CRM, BW, APO, and MDM/ALE.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): Sucid Function supports SAP GUI. Support for the SAP Web interface is currently not available.

III. Tool Maintenance

Allows toolbar customizations: Toolbar customizations are not offered.

IV. Tool Installation

Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? The product uses a thin-client approach. The user interface runs as part of SAP itself, so users access Sucid features from SAP as they would any other SAP feature.
Vendor offers floating licenses: Vendor offers a simple monthly subscription pricing model. Pricing is based on the number of automated transactions, as opposed to per-seat style licenses.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: Stores test assets in a variety of relational databases: Oracle, SQL Server, and others.
Integrates with Solution Manager: No integration with Solution Manager.

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A. Not yet integrated.
Integrates with test management tool: Automated tests are stored within Sucid products—not in a separate test management tool. The Sucid product line includes its own built-in test management tool.
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control? The built-in test management tool does not include versioning features.
Integrates with eCATT: Sucid Function integrates with eCATT interfaces and leverages eCATT capabilities seamlessly.
Integrates with test tools other than eCATT: Does not ship with prebuilt integration into any test tool other than eCATT.
Open API to integrate with other tools, languages: The product exposes interfaces through RFC and Java APIs.

VI. Tool Execution

Decision-making options for each test step on pass or fail: No.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Allows the user to visually step through any automated test one screen at a time to see exactly what is happening at each step on each screen, enabling visual review and verification of screens, SAP data field values, SAP messages, etc.
Automated impact analysis for application changes: Does not supply automated impact analysis.
Capabilities to run unattended and skip failed iterations: The product may be run in unattended mode. It will execute later iterations if an earlier iteration fails.
Runs scripts in background and foreground mode: Runs tests in the foreground on any user’s PC, with each step/screen displayed visually on the user’s display for visual verification of test execution and results; alternatively, tests can be run in the background on a test server.

Has scheduling capabilities: Does not offer a built-in GUI-based scheduler, but it does offer interfaces enabling simple scripts to invoke test runs at scheduled times.
Scheduling tool offers execution with dependencies: The dependency logic would need to be built into the scheduling script by the user.
Contains debugger: Sucid Function enables the user to visually step through any automated test to see exactly what is happening at each step on each screen, to visually debug and diagnose problems that may develop either in the automated tests themselves or in the SAP applications or data.
Allows for automatic synchronization between client and server: The product’s orchestration server maintains a real-time database with the state of all tests and automatically ensures that transactions are executed in the correct order with the proper think times. Each transaction must complete before the next transaction begins, or before the next think time delay begins if there is a specified think time, ensuring continuous synchronization and accurate simulation of real-world usage patterns.
Built-in error handling capability: Handles errors in automated test execution. In the event of an error, the functional test cases will likely fail and be reported as such, but the rest of the automated test transactions will be executed.
Built-in context recovery capability: Maintains context through its orchestration server, which will maintain and recover context in the event of errors or failures in automated test cases or in SAP itself.
Automatic timing for each step, process, and suite: The product’s orchestration server automates coordination and timing for each step (transaction), process (business process), and suite.
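The two scheduling rows above state that test runs are invoked by simple user-written scripts and that any dependency logic lives in those scripts. A minimal, generic sketch of such a wrapper follows; trigger_test is a hypothetical placeholder for the product’s test-invocation interface, not a documented call:

    import sched
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def trigger_test(name):
        """Hypothetical call into the tool's test-invocation interface;
        returns True if the run passed."""
        print("starting test run:", name)
        return True

    def run_chain():
        # Dependency logic lives in the script: run the regression suite
        # only if the smoke test passed first.
        if trigger_test("smoke_test"):
            trigger_test("regression_suite")

    # Schedule the chained run to start two hours from now.
    scheduler.enter(2 * 60 * 60, 1, run_chain)
    scheduler.run()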

VII. Tool Data

User-defined data filtering on all views: User can query test results according to a variety of parameters, such as the date/time, the person who ran the test, the name of the test run, the associated SAP transport, the associated requirement(s), etc. Summary reports are included with the product.
All test assets stored as data in relational database: Stores all test assets as data in a relational database.
Database verification and data acquisition: Retrieves and verifies data values in the SAP database as part of automated test cases.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Includes automated functions to process the contents of SAP data fields and messages for use in functional test automation. Users do not have to use Excel or perform scripting to do this.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Sucid Function supports data-driven testing such as looping or iterative execution of an automated transaction using different data values pulled from an SAP table in each iteration, without scripting (for example, to test a set of different material types using a single transaction).
Allows for verification points (objects, database values, text): Supports the ability to verify text fields, database values, and various objects in SAP.
Tool offers regular expressions (i.e., text character matching): Supports regular expressions.
Capabilities for creating external data files: Sucid Function provides the ability to parameterize fields in an automated transaction without scripting (for example, to produce a valid, unique new PO number each time a new order is added), thereby reducing the need to create and manage external data files in the first place. The data generated or used during automated test execution is stored together with all other test outputs in a relational database.

Allows data seeding and data correlation: Automates discovery during transaction capture, and variable chaining during transaction execution, between the transactions that comprise an SAP business process. Handles variable chaining during test execution without requiring any scripting or other intervention from the user; for example, if a new order is added in one step of a business process, the product will automatically pass the new PO number created for the new order into the next transaction in the business process, which executes to update the order that has just been created.
Allows variable declaration: SAP data fields and messages incorporated into automated tests already carry variable types.
Captures screen text (i.e., status bar messages): Captures all screen text, such as SAP error or confirmation messages, and this text can have functional test cases attached to check its content without requiring any scripting.
Provides playback with multiple data access methods (i.e., random): Supports a variety of data access methods for data-driven automated tests, such as sequential, random, etc.
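The variable-chaining behavior described in the data seeding row above (a PO number created in one transaction is passed into the next) can be pictured with a generic sketch. Both functions are hypothetical stand-ins for captured transactions, and the parsing is illustrative only, not the product’s mechanism:

    import re

    def create_order(material, quantity):
        """Hypothetical stand-in for a captured order-creation step;
        returns the confirmation message SAP would display."""
        return "Purchase order 4500012345 created"

    def update_order(po_number):
        """Hypothetical stand-in for the follow-on transaction."""
        print("updating purchase order", po_number)

    # Correlation: capture the new PO number from the first step's
    # confirmation text and chain it into the next transaction.
    message = create_order(material="M-01", quantity=10)
    po_number = re.search(r"\d+", message).group(0)
    update_order(po_number)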

VIII. Tool Security

User and group security and permissions for each test asset component: Offers security, but not down to the individual test asset component.
Allows SAP roles-based testing: Offers SAP security testing either by one or more specific user IDs or by security profiles; when testing by security profiles, all users associated with a security profile are tested. Can automatically test whether users who are supposed to be able to run designated transactions are, in fact, able to run them (positive testing), and can also automatically test whether users who are not supposed to be able to run designated transactions are, in fact, not able to run them (negative testing).
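Positive/negative security testing of this kind amounts to iterating over a user-by-transaction authorization matrix and asserting the expected outcome in each cell. The sketch below is a generic illustration; the users, t-codes, and attempt_transaction helper are invented for the example and are not part of the Sucid product:

    # Expected authorization matrix: (user, t-code) -> should access succeed?
    expected = {
        ("clerk", "VA01"): True,     # clerk may create sales orders
        ("clerk", "SU01"): False,    # but must not administer users
        ("sysadmin", "SU01"): True,
    }

    def attempt_transaction(user, tcode):
        """Hypothetical stand-in returning True if SAP allowed the user
        to execute the transaction."""
        allowed = {("clerk", "VA01"), ("sysadmin", "SU01")}
        return (user, tcode) in allowed

    for (user, tcode), should_pass in expected.items():
        outcome = attempt_transaction(user, tcode)
        kind = "positive" if should_pass else "negative"
        verdict = "PASS" if outcome == should_pass else "FAIL"
        print("%s test %s/%s: %s" % (kind, user, tcode, verdict))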

IX. Vendor Support

Vendor offers Web-based patches, downloads to upgrade tool: Vendor provides updates via its website.

X. Training

Vendor offers test tools training: Vendor provides onsite training services to customers’ users.
Vendor offers certification examination in test tool: Vendor does not offer a certification program.
SAP Corporation has formally certified the tool: No.

XI. Test Reporting and Review

Creates automatic test results files (test logs): Sucid Reports automatically generates and stores test results from all automated test executions, including date/time stamp, identifying information on the test(s) executed, the identity of the person executing the test, review/authorization status of test results, screenshots of executed screens from the test run, response times, SAP messages returned per screen, etc.
Results logs store screen captures: Yes. Automatically generates archivable electronic documents that include all data related to each executed test (date/time stamp, test results, screenshots, etc.); these documents can be saved and viewed to prove exactly what has been tested and what the results of those tests were.
Results logs show status for each row of data (iteration): Sucid Reports logs test results per iteration.
Results logs include date and time stamp: Yes.
Results logs can be saved in different formats (i.e., HTML, .doc): Sucid Reports automatically produces viewable and archived results reports in HTML format. These reports can be saved in other formats as well.

User-defined query and reporting or charting capability: User can query test results according to a variety of parameters, such as the date/time, the person who ran the test, the name of the test run, the associated SAP transport, the associated requirement(s), etc. Summary reports are included with the product.
English-narrative documentation produced automatically from test processes: Does not output or produce text in narrative form. It does communicate in clear English the actual vs. expected results for functional, load, and security testing, but the text is interspersed with screenshots and is not presented in narrative form.
Export to text capability for all test assets: Does not offer this capability.
User-extensible classes, actions, and functions: Does not require classes, actions, and functions.
User can extend interface with unlimited new attribute fields: Does not support this capability.
Allows many-to-one and one-to-many requirements traceability: Supports association of test cases with requirements and with transports for traceability.
Supports full indirection for all test processes and data file names: No.
Language and platform independent: Scripting is almost entirely eliminated, so language is not an issue. The product’s architecture makes it platform independent.

Reprinted with permission from Sucid Corporation.

EXHIBIT 6.6 Test Tool Evaluation Matrix (Vendor: Worksoft, Inc.)

Tool(s) Name: Certify
Tool Evaluator: Linda Hayes
Vendor Name: Worksoft, Inc.
Vendor Website: www.worksoft.com
Date of Evaluation: 02/01/06
Tool Offerings: Test management, automation, and reporting solution for SAP and other platforms

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: Any identified changes can be made globally to all affected test assets.
Tests can be developed concurrently with software development: Users may define the application map directly from the specification or prototype and develop their tests before the software is delivered.
No script coding required: No code is written or generated for a test, and during execution no code is exposed or debugged.
Tool supports recording of non-SAP applications: Certify has native support for Web, mainframe, .NET, Java, VB, and XML. It also supports other platforms through its API, including Siebel.

Common scripting language (i.e., VB): No coding or scripting is required. Custom extensions may be added using the tool or language of choice.
Allows RFCs to be called: Yes, using the Open API.
Produces automatic optional steps: Yes, using the step ignore option.
Has analog and object recording capabilities: Certify supports object- and analog-level actions; however, the user interface is always presented at the object level.
Has repository for managing the properties of recorded objects: All Certify test assets are stored in a database repository, including all objects, test processes, test data, and test results.
Think times can be added without changing programming code: No coding is used, but the user can set global or local timeouts.
Test tool allows for creation of user-defined functions: Certify can be extended by adding classes and actions for user-defined functions or custom controls. User-defined functions can be accessed from any test or component.
Test tool offers keyword-driven tests: Certify offers both class action and keyword functionality.
Has interactive captured screen of captured/recorded process: Certify allows screens to be captured on demand, by default, or based on results.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: Certify does not use scripts, and captured screens need not be maintained. All screens and tests are described and maintained in a database and can be automatically updated globally when changes are made.
Allows renaming of labels for captured fields: Yes.
Allows adding of start and stopwatches: Stopwatches may be added but are not needed, because Certify automatically times every step, process, and session.
Vendor offers library of prerecorded SAP scripts/processes with tool: No.
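Keyword functionality of the kind named in the keyword-driven row above is commonly implemented as a table of keywords dispatched to executable actions, so non-programmers edit the table rather than the code. The sketch below is a generic Python illustration; the keywords and sample rows are invented for the example and are not Certify’s actual vocabulary:

    # Each test row pairs a keyword with its arguments (illustrative only).
    test_table = [
        ("open_transaction", {"tcode": "VA01"}),
        ("enter_field", {"field": "Material", "value": "M-01"}),
        ("verify_text", {"expected": "Standard Order"}),
    ]

    def open_transaction(tcode):
        print("opening", tcode)

    def enter_field(field, value):
        print("setting %s = %s" % (field, value))

    def verify_text(expected):
        print("verifying screen contains '%s'" % expected)

    keywords = {
        "open_transaction": open_transaction,
        "enter_field": enter_field,
        "verify_text": verify_text,
    }

    for keyword, args in test_table:
        keywords[keyword](**args)  # the table drives execution, not code edits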

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Open support for any SAP add-on or third-party bolt-ons.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): SAP versions 4.X and 5.X.

III. Tool Maintenance

Allows toolbar customizations: Certify allows an unlimited number of user-defined fields to be added using multiple object types such as text fields, checkboxes, combo boxes, etc. These fields may be required or optional and are available for custom filters, queries, and reports.

IV. Tool Installation

Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? Thick client, but managed from a central server for automation updates.
Vendor offers floating licenses: Yes.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: Certify supports repositories using MSDE, SQL Server, or Oracle.
Integrates with Solution Manager: No.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: No.

Integrates with test management tool: Certify is an integrated management and automation solution.
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party version control? Certify manages multiple versions of applications and test assets and provides impact analysis and automated updates for changes between versions.
Integrates with eCATT: Integrates with SAP GUI Scripting.
Integrates with test tools other than eCATT: Yes.
Open API to integrate with other tools, languages: The Certify Open API allows functions to be written using the tool or language of choice.

VI. Tool Execution

Decision-making options for each test step on pass or fail: Every step in Certify offers on-pass/on-fail options to control the test flow based on execution results and to specify the correct log response.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: The Certify execution dashboard allows users to perform single-step execution, skip steps, set breakpoints, monitor variables or recordsets, capture the screen, or abort the session.
Automated impact analysis for application changes: Any application change is automatically mapped to all affected test assets for instant impact analysis.
Capabilities to run unattended and skip failed iterations: Certify provides a rich set of error handling and recovery options to control execution workflow based on results.
Runs scripts in background and foreground mode: No.
Has scheduling capabilities: Certify integrates with Windows Task Scheduler.
Scheduling tool offers execution with dependencies: User can define execution dependencies based on runtime results.

Contains debugger: The execution dashboard provides step-level execution, breakpoints, and variable and recordset watch windows. It also allows screen captures on demand, as well as skipping steps.
Allows for automatic synchronization between client and server: Every Certify step is automatically synchronized with the application playback speed.
Built-in error handling capability: Certify provides a rich set of error logging and handling options to assure that unattended execution can continue after errors.
Built-in context recovery capability: Certify provides an automated recovery system that can restore context to a known state and continue execution after a loss of context.
Automatic timing for each step, process, and suite: No stopwatches are required; timing is automatically measured at every level of execution.

VII. Tool Data

User-defined data filtering on all views: Every test asset view can be customized as to columns, column order, sort order, and filtering based on standard or custom criteria.
All test assets stored as data in relational database: All Certify test assets are stored as data. There are no script files for test cases. Only user-defined classes or actions require coding, and only once per class and action.
Database verification and data acquisition: Certify enables direct verification or acquisition of data stored within a database.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Test data may be created or modified within Excel as desired, then stored in the Certify repository.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Certify allows recordsets to be defined and stored within the database, or imported from Excel or CSV files. Users simply link recordsets to test processes, and all file handling and looping is automatically provided.

Allows for verification points (objects, database values, text): Any step can verify any object.
Tool offers regular expressions (i.e., text character matching): Yes. Certify also provides a rich set of predefined verification criteria such as starts with, contains, does not contain, etc.
Capabilities for creating external data files: Certify can capture data during runtime and write it to the repository. Any repository data can be exported as needed to external text files or spreadsheets.
Allows data seeding and data correlation: All Certify data variables are stored in the repository and are available from any test for input, verification, or output.
Allows variable declaration: Text, number, date.
Captures screen text (i.e., status bar messages): Text or bitmap screen captures.
Provides playback with multiple data access methods (i.e., random): Data access is sequential.
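Predefined verification criteria such as “starts with,” “contains,” and “does not contain” behave like small predicates applied to a captured value at a verification point. The following generic Python illustration, with an invented status bar message, shows the idea; it is not Certify’s implementation:

    # Generic verification criteria of the kind the matrix describes.
    criteria = {
        "starts with": lambda actual, arg: actual.startswith(arg),
        "contains": lambda actual, arg: arg in actual,
        "does not contain": lambda actual, arg: arg not in actual,
    }

    def verify(actual, criterion, argument):
        passed = criteria[criterion](actual, argument)
        print("verify %r %s %r: %s"
              % (actual, criterion, argument, "PASS" if passed else "FAIL"))
        return passed

    # Checks against a captured status bar message (example data):
    verify("Standard Order 5001234 has been saved", "starts with", "Standard Order")
    verify("Standard Order 5001234 has been saved", "does not contain", "error")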

VIII. Tool Security

User and group security and permissions for each test asset component: Access to every test component can be controlled by project, user, and group as to read/write/execute permission.
Allows SAP roles-based testing: Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Yes.
SAP Corporation has formally certified the tool: No. This is in process.

X. Training

Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Yes.

XI. Test Reporting and Review

Results logs store screen captures: Screen captures are optional and may be defined and stored at any step. Screens are stored in the database and may be saved to an external file.
Results logs show status for each row of data (iteration): Execution logs show each step for each data row.
Results logs include date and time stamp: Date and time are available for every step, process, and session.
Results logs can be saved in different formats (i.e., HTML, .doc): Reporting output is available in user-defined formats.
Creates automatic test results files (test logs): Execution logs show step-level detail with both actual and expected results, as well as screen captures if desired and elapsed time, end to end, at each level.
User-defined query and reporting or charting capability: Every test data item can be filtered, sorted, or queried, and all test assets can be included for reporting purposes.
English-narrative documentation produced automatically from test processes: Each Certify step is automatically expressed as a narrative description that can be modified within the database and produced in a documentation-style report or exported to an external file.
Export to text capability for all test assets: Every Certify test asset can be exported to an external file.
User-extensible classes, actions, and functions: The Certify Open API allows user-extensible classes, actions, and functions.
User can extend interface with unlimited new attribute fields: Certify allows an unlimited number of user-defined fields to be added to the repository.

Allows many-to-one and one-to-many requirements traceability: Certify requirements can be linked to multiple test processes, and vice versa.
Supports full indirection for all test processes and data file names: All Certify test processes and data files can be called using variable names to allow full indirection.
Language and platform independent: Certify has native support for Web, .NET, Java, VB, mainframe, and XML. Other platforms can be added using the Certify Open API. A single Certify test process can seamlessly span multiple applications and platforms within a single execution session.

Reprinted with permission from Worksoft, Inc.

EXHIBIT 6.7 Test Tool Evaluation Matrix (Vendor: iTKO Inc.)

Tool(s) Name: iTKO LISA Complete SOA Test Platform
Tool Evaluator: Jason English, iTKO Inc.
Vendor Name: iTKO, Inc.
Vendor Website: www.itko.com
Date of Evaluation: 08/15/06
Tool Type: SOA testing tool for no-code functional, regression, integration, load, and performance testing of Web Services (WSDL/SOAP), messaging layers (JMS/MQ), and web interfaces, as well as Java/J2EE, databases, and other middle-tier components. Ideal for testing SAP NetWeaver portals and distributed services-based architectures and components.

I. Transaction Capture and Playback (and Overview of LISA)

LISA directly tests an entire business workflow through the browser-based and/or portal integration aspects of SAP applications, especially as a company’s applications move to a more distributed SOA (Services-Oriented Architecture) environment, with NetWeaver and multiple J2EE servers (WebLogic, others) housing Web Services, JDBC/SQL databases, RMI, messaging/JMS/MQ, file systems, and integration points with other major enterprise applications. LISA’s support of all the above technologies covers unit, functional, regression, load, integration, and performance testing in a single solution, with test cases usable across that entire lifecycle.

Note: LISA does NOT test a Windows UI or thick-client portion of an SAP application (such as the UI of an applet or VB UI, etc.), nor does it test an internal workflow of SAP, such as testing ABAP calls within R/3.

Automated global changes for object changes and deletions: Have LISA point to a service, and by reflection the software will dynamically incorporate all the available references and data within the test object. If the object changes, LISA can update those references dynamically. The only case in which you will need to make a change is if a method signature type has changed. Since LISA’s model is declarative and not strict-UI based like many acceptance testing tools, changes in the user interface will not automatically break the test case.
Automated impact analysis for application changes: This refers to the ability of the tool to inform the user of all tests that are affected by a change to an application object; otherwise the user must search all tests and manually locate all references. We don’t do this out of the box, but if we were part of a total ALM test management solution, we would be able to give that impact analysis.
Tests can be developed concurrently with software development: This refers to the ability to create an automated test without having the application available to record against, a necessary feature for agile development approaches that require tests to be developed before the code. Yes, this is our encouraged way for test development. With this process, developers can help jump-start QA by supplying their unit tests, and QA can then build proper business scenarios and test a deployed application that has no user interface.
No script coding required: The tool allows point-and-click test creation and staging automation without the need to write, generate, or maintain programming code. LISA’s declarative test case model produces XML files that can be stored in any development environment tool and managed through LISA’s interface. There is no need to rewrite or transfer test code when moving through the unit, functional, regression, load, and performance test phases. We do offer a kit to extend LISA to test new technologies, and LISA would be able to support no-code authoring to exercise those extensions.
Tool supports recording of non-SAP applications: iTKO LISA is an ideal tool when you are testing an integrated environment that includes SAP in the larger context of an enterprise architecture. LISA supports most of the standards of Web Services/SOAP and JEE, which in turn means LISA can uniquely support testing of multiple non-SAP applications that might be leveraged under SAP NetWeaver or within J2EE application servers (WebLogic, JBoss, WebSphere, etc.).

Common scripting language (i.e., VB): LISA is a Java application, and it talks to most enterprise components natively. However, you can extend the testability of applications with the LISA Extension API, with Java, or, if you really need to script, through BeanShell (Java scripting).
Allows RFCs to be called: Remote calls for most components, such as RMI and EJB, are enabled right out of the box.
Produces automatic optional steps: LISA automatically routes “normal” pass/fail conditions for each test step to the next step in the workflow or to a Failure node. From there, additional options can be applied based on assertions the tester makes. Additionally, developers can quickly define any test workflow as a “Custom Process” that would preload with any optional steps desired, depending on the business need.
Has analog and object recording capabilities: LISA does not perform analog testing, except for Swing (Java) apps that require some analog recording to be captured. For robot (analog) type recording, we support Swing clients only; for all other services, the objects are fully simulated (digital) and abstracted into dynamic data properties. Capture of Web-based test cases is done via browser simulation and complete HTTP, XML, JMS, and other data stream and object interception by LISA, so Web apps and portal components are abstracted into testable business properties.
Has repository for managing the properties of recorded objects: LISA offers a server-side repository for team management and storage of test cases and test suites that may be scheduled or run as functional, regression, and load tests. LISA tests are all stored as XML files, locally on the client computer, or attached to any existing groupware or team collaboration software. (We do this on purpose so teams can use their development process tools of choice.)
Think times can be added without programming or code changes: After the script is recorded, the tester can add think times and delays to each test step without having to insert or change existing programming code. We add them automatically (user defined), and think times can be changed to different values on a per-node basis with point-and-click ease, or imported dynamically from a load profiler.

Test tool allows for creation of user-defined functions: User-defined functions can be accessed from any test or component, and you can also capture browser tests through this method.
Test tool offers keyword-driven tests: LISA renders “keyword-driven” testing obsolete—you can directly create real workflows in a point-and-click manner in LISA, and these are actual tests that teams can label however they want. There is no separate process of abstracting granular test coding into keyword libraries. Test procedures can also be defined as “business processes” in LISA, which lets QA/business teams test in terms of processes instead of root-level technologies.
Has interactive captured screen of captured/recorded process: LISA has a powerful Interactive Test Run (ITR) function that lets you step through each Web UI or middle-tier component in the test and see both what was seen, if it is a UI step, and the exact data that was relayed, from several different views according to the user’s technical requirements. This is extremely powerful for problem solving, allowing developers to see both the end result and the underlying cause of each step in a test case. Remember, we don’t directly test thick-client UIs.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: For our existing Web client, you could take the same test and recapture the page with the replay utility, which can catch the new field.
Allows renaming of labels for captured fields: You can swap any captured value or value name in a test step by typing in another value or attaching a dynamic value to it.
Allows adding of start and stopwatches: You can pace, or start and end, a LISA test case at any scheduled time or interval.
Vendor offers library of prerecorded SAP scripts/processes with tool: Not out of the box. We would offer LISA extensions to rapidly build “solution nodes” specifically for SAP functions that customers would have a good use for.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): N/A to core SAP modules.
Supports SAP GUI, Citrix, and Portals: For GUI testing, LISA supports only browser-based UIs leveraging SAP and SAP NetWeaver.

III. Tool Maintenance

Allows toolbar customizations: N/A.

IV. Tool Installation

Tool installation is Web-based or requires desktop GUI installation (fat or thin client installation)? Downloadable, double-click install of the LISA client application. No further “implementation” is required to test multiple component types. LISA User Client: a fat (Java Swing-based) client that runs on Windows/Linux/Unix/Mac platforms. LISA Server Application: a clientless app that LISA test cases can leverage; it provides the test scheduling and virtual user load generation for the company’s testers using the LISA User Client or embedding command-line calls to LISA Server within their code.
Vendor offers floating licenses: Yes, LISA user licenses can be purchased per seat or in volume as fixed or floating licenses. Virtual users for load testing are always floating and “pooled” to the company on the LISA Server. Bear in mind that there are no licenses required on the test target server; if LISA can reach it via remote invocation (in-container testing), LAN/WAN, or the Internet, it can test it.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: LISA tests are stored as XML and can be easily attached to any development portal, ALM/SCM/RM/issue-tracking or groupware tools, and file systems.
Integrates with Solution Manager: No direct integration, though you can simply store an XML file in Solution Manager or create an automation process with LISA.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: We can. You can call any running LISA Server to run and report off a test headlessly via the command line, in a build or other test script, or by a Java method.
Integrates with test management tool: Yes; currently we tightly integrate with MKS for test management (where LISA calls are created in MKS and vice versa). However, for any QACenter-style app, it is much easier to just attach data-rich LISA test cases and test runs as XML files than to attach manual Word/Excel documents or manually coded test cases.
If test management tool exists, does it offer version control capabilities? Or integrate with third-party tool for version control? Depends on the test management solution used. (LISA is not a test management tool in this sense.)
Integrates with eCATT: Unknown. If eCATT is Java or can generate XML, we can extend LISA to use eCATT.
Integrates with test tools other than eCATT: Very strong JUnit, NUnit, and Ant build integration (run LISA from those scripts, or run the scripts from LISA and affect pass/fail results). Other test tools depending upon technology.
Open API to integrate with other tools, languages: Yes, LISA offers a highly extensible test framework for every test phase and target technology, and we publish our LISA Extension API with our integration kit.
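Several answers above describe driving a LISA Server headlessly from a command line or build script. The exact command-line syntax is not given here, so the command name and flags below are placeholders only; the sketch simply shows the build-integration pattern of invoking a test run and propagating its exit status to the build:

    import subprocess
    import sys

    # Placeholder command: substitute the tool's real headless runner and
    # arguments; none of these names are documented in this chapter.
    cmd = ["run_tests", "--suite", "nightly_regression.xml", "--report", "out.xml"]

    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)

    # Fail the build when the test run fails.
    sys.exit(result.returncode)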

VI. Tool Execution

Decision-making options for each test step on pass or fail: Yes, we have point-and-click assertions and filters where you can direct LISA’s workflow in either a binary (pass/fail) or conditional (Boolean) way.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Yes; when working with a component, you are directly executing that component in a point-and-click way with LISA’s call panel. Via our Interactive Test Runner, you can step through tests, view data at any level, and access all test information without looking at or working with code.
Capabilities to run unattended and skip failed iterations: A “fail/continue” step method is available.
Runs scripts in background and foreground: Foreground = using the ITR to step through a case. Background = staged tests that run at timed intervals/quantities.
Has scheduling capabilities: Yes, through our test scheduler, which is part of the LISA Server.
Scheduling tool offers execution with dependencies: Yes; you would use our scheduler and set the tests up in a test suite to define order.
Contains debugger: Yes; we can record variables and states with our reporting. No compiler is needed.
Allows for automatic synchronization between client and server: LISA offers automatic timing synchronization management and can also be set to accept asynchronous inputs or a timing profile, but it does not robot SAP application UIs.
Built-in error handling capability: Yes.
Built-in context recovery capability: LISA is very strong at reproducing test context and configurations (for SOA, Java, and Web applications, not client UIs); you can define a process to execute on failure to bring the application back to a start configuration state.
Automatic timing for each step, process, and suite: Yes, we record those response timings, which is part of our reporting framework.

VII. Tool Data

User-defined data filtering on all views: We don’t currently have an internal process to find values against a series of files. Test data can be stored in spreadsheets, XML, or central databases (which would enable data filtering).
All test assets stored as data in relational database: All tests are stored as XML in a file system.
Database verification and data acquisition: You can view and acquire data by directly entering the value wanted or by writing SQL statements against the database, but you won’t need to write code to compare values.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Yes.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Yes, very much.
Allows for verification points (objects, database values, text): Yes.
Tool offers regular expressions (i.e., text character matching): Yes; can be entered as part of any assertion.
Capabilities for creating external data files: Yes.
Allows data seeding and data correlation: Yes for multiple SOA components, Web Services, databases, etc. No for SAP apps/modules, which would require extensions.
Allows variable declaration: Yes.
Captures screen text (i.e., status bar messages): LISA can capture values from Web UIs only, and save them out to files.

Provides playback with multiple data access methods (i.e., random): Yes.

VIII. Tool Security

User and group security and permissions for each test asset component: We don’t have a repository—LISA uses existing development tools/groupware and their permissions.
Allows SAP roles-based testing: No.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Yes.

X. Training

Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Not currently—iTKO directly refers service practitioners.
SAP Corporation has formally certified the tool: In process.

XI. Test Reporting and Review

Results logs store screen captures: Yes (Web UI views).
Results logs show status for each row of data (iteration): Yes.
Results logs include date and time stamp: Yes.
Results logs can be saved in different formats (i.e., HTML, .doc): Yes; you can store values in a database, too.

Creates automatic test results files (test logs): Yes, at a very high level of detail (you can see the state of the result at every layer).

User-defined query and reporting or charting capability: Yes.

English-narrative documentation produced automatically from test processes: No need for a "narrative," as you can see the test as a self-explanatory graphical workflow. BPEL integration and output coming October 2006.

Export to text capability for all test assets: Data and reporting information can be exported (but you wouldn't want to leave LISA).

User-extensible classes, actions, and functions: Yes.

User can extend interface with unlimited new attribute fields: Not available (LISA's easy to use, but it is the toolkit you use).

Allows many-to-one and one-to-many requirements traceability: No, LISA is not a test requirements manager (use the requirements manager of your choice).

Supports full indirection for all test processes and data file names: Not automatically; you could apply this as an approach to building a workflow.

Is the tool language and platform independent? Yes. LISA will run on almost any desktop client or Java 1.4+ compliant server (Linux/Unix, Windows, OS X, Solaris, HP-UX, etc.).

Reprinted with permission from iTKO Inc.

EXHIBIT 6.8 Test Tool Evaluation Matrix (Vendor: IBM)

Test Tool Evaluation Matrix

Tool(s) Name: IBM Rational Functional Tester
Tool Evaluator(s): Swathi Rao and Shinoj Zacharias
Vendor Name: IBM
Vendor Website: www.ibm.com/rational
Date of Evaluation: 09/11/06
Tool Type: SAP Record/Playback Scripting Test Tool for regression, string, integration, smoke testing

Rational Functional Tester's extension for SAP enables SAP users to perform automated functional and regression testing of their SAP (6.2/6.4 GUI Client and 4.6/4.7 Server) applications, providing them with all the capabilities that RFT has to offer, such as Verification Points, Data-Driven Testing, Dynamic Data Validation, Object Maps, ScriptAssure, and more. It makes SAP testing simple, easy, and flexible by creating scripts that are robust and resilient to application changes.

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: The object map is specifically designed to address the pain of script maintainability. The object map is automatically populated when a script is recorded, or you can manually add objects to the map. The map provides the testing team a single source to update when objects in the AUT are changed. By changing the map, all scripts that reference that object will use the updated object information. Furthermore, RFT has a feature called the "Object Map Find and Modify" utility, which enables you to find all objects that match criteria such as property names, property values, or various custom filters. Actions can then be taken on the matching objects to Add Property, Remove Property, Change Value, and/or Change Weight. Modifications can be applied to objects one at a time or globally.

When an object is renamed in the object map, the change is immediately reflected in all scripts that refer to that object. However, if objects are deleted from the script's object map, RFT does not automatically delete all references to that object in the script; instead, compilation errors appear at the places where the deleted object is used.

Automated impact analysis for application changes: RFT uses object-oriented technology to identify objects by their internal object properties and not by screen coordinates or object names. So if the location, text, or any other specific property of an object changes from build to build, RFT can still find it and proceed with playback without breaking the scripts. "ScriptAssure" is the technology that makes the test scripts immune to object property changes between software builds.

Tests can be developed concurrently with software development: RFT does provide the ability to code the entire script without having any objects in the map. This means the entire script can be hand-coded without using the recorder; as long as the tester/developer knows the hierarchy and properties of the objects in the application, he or she can proceed with creating test scripts.

No script coding required: Yes, no scripting is required. RFT is used to record manual interactions with the application under test. Recording is transparent to the user, in the sense that, while the user performs the test, RFT makes a script of all the user activities on the fly. You can also add verification points, data-driven tests, comments, log messages, timers, etc., while recording. Once the scripts are recorded, they can be used for unattended execution. If the user is looking to perform keyword-driven testing with RFT, then RFT integrates with an open-source keyword-driven testing framework called SAFS (Software Automation Framework Support), which does just that.

Common scripting language (i.e., VB): Rational Functional Tester is available in two scripting languages: Java and VB.NET. The Java language is supported using the Eclipse (open-source) IDE, while the VB.NET language is supported using the Visual Studio .NET IDE. RFT gives developers and more advanced testers the ability to work with the script code directly in either VB.NET or Java, based on their skill set. Users can leverage the power of the language themselves to perform complex tasks.

Allows RFCs to be called: No; currently this feature is not provided by RFT.

Produces automatic optional steps:

Has analog and object recording capabilities: RFT does not support analog recording capability; however, using scripting (hand-coding), the same task can be achieved. RFT does provide the ability to record on all SAP objects, with the ability to recognize these objects independent of the object location.

Has repository for managing the properties of recorded objects: The Functional Tester test object map lists the test objects in the application under test (AUT). The object map is a static, hierarchical representation of the objects in the application, where attributes and properties of captured or recorded objects are stored and maintained.

Think times can be added without programming or code changes: Think times can be added while recording using the Script Support functions provided on the recording toolbar. Even after recording is completed, the user can insert steps, such as think times, by clicking the "Insert Recording" option in the Script menu, which brings up the recorder and thus provides the Script Support functions.

Tool supports recording of non-SAP applications: Yes. RFT is an automated functional and regression testing tool for testing Java, .NET, web-based, Siebel, SAP, and terminal-based applications.

Test tool offers keyword-driven tests: No; currently this feature is not supported.

Has interactive captured screen of captured/recorded process: No; currently this feature is not supported.

If tool offers captured/recorded screen, user can modify script logic through the captured screen: No; currently this feature is not supported.

Allows renaming of labels for captured fields: Yes. The object names can be changed from the script explorer or test object map.

Allows adding of start and stopwatches: Yes. Timers can be included in the script while recording by using the Script Support function on the recording toolbar.

Vendor offers library of prerecorded SAP scripts/processes with tool: No.

Test tool allows for creation of user-defined functions: Yes. RFT is supported in two industry-standard languages, Java and VB.NET, which gives users the ability to create their own functions that can be accessed from within and across test scripts.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Yes. As long as these packages can be accessed via the SAP GUI, we support testing them.

Supports SAP GUI, Citrix, and Portals: SAP Server 4.6 and 4.7 and GUI Client 6.2 and 6.4.

III. Tool Maintenance

Allows toolbar customizations: Yes.

IV. Tool Installation

Tool installation is web-based or requires desktop GUI installation (fat or thin client installation)? The tool installation is not web-based. However, the tool uses the IBM Installation Manager, which can point to a local or remote location for installing the product.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: No; test assets are stored as flat files.

Integrates with test management tool: Yes. RFT integrates with two of Rational's test management solutions (the new-generation tool CQTM and the legacy tool TestManager).

If test management tool exists, does it offer version-control capabilities, or integrate with a third-party tool for version control? RFT can be directly integrated with ClearCase, the configuration management solution from Rational. The test management solutions that RFT integrates with (CQTM/TM) also integrate with ClearCase.

Integrates with Solution Manager: No.

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A.

Vendor offers floating licenses: Yes.

Integrates with eCATT: No.

Integrates with test tools other than eCATT: No.

Open API to integrate with other tools, languages: No.

VI. Tool Execution

Decision-making options for each test step on pass or fail: No.

Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Yes. RFT uses the IDE's debugger to debug every recorded step, providing visibility into the value of data at any point. Screen snapshots are automatically generated only for fatal errors; for capturing screen snapshots otherwise, RFT exposes APIs that can be included in the script.

Capabilities to run unattended and skip failed iterations: RFT does provide the ability to run scripts unattended; however, it will stop at the first failure encountered.

Runs scripts in background and foreground mode: Scripts can only be run in foreground mode.

Has scheduling capabilities: No.

Scheduling tool offers execution with dependencies: The test management solution RFT integrates with does provide the capability to add dependencies to test execution; however, no scheduling capabilities are provided.

Contains debugger: RFT uses the IDE's debugger.

Allows for automatic synchronization between client and server: Yes. RFT provides the capability to wait for the existence of an object (with a specified timeout), which enables the user to provide timing synchronization.

Built-in error handling capability: Yes. Meaningful exceptions are thrown at failures.

Built-in context recovery capability: No.

Automatic timing for each step, process, and suite: No.

VII. Tool Data

User-defined data filtering on all views: No.

All test assets stored as data in relational database: No.

Database verification and data acquisition: No.

Provides Excel-based functions (i.e., TRIM, MID, etc.) to clean up captured text: No; this feature is coming shortly.

Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Yes. RFT allows data-driven testing by using data from an external file, a datapool (a .csv, tab-delimited, or comma-separated file), as input to a test.

Allows for verification points (objects, database values, text): Yes. The tool provides two types of verification points, the Object Data Verification Point and the Object Properties Verification Point, to capture an object's state and data during a test.

Tool offers regular expressions (i.e., text character matching): Yes.

Capabilities for creating external data files: Using the power of the scripting languages (Java or VB.NET), this capability can be achieved.

Allows data seeding and data correlation: Yes. SAP transactions can be strung together within the same script or automated test case.

Allows variable declaration: Yes.

Captures screen text (i.e., status bar messages): Yes.

Provides playback with multiple data access methods (i.e., random): Yes; supports both random and sequential access.

VIII. Tool Security

User and group security and permissions for each test asset component: The test management solution RFT integrates with provides this capability.

Allows SAP roles-based testing: No. If RFT is integrated with TestManager, then TestManager allows limited role-based testing capabilities, but RFT as a tool itself does not allow role-based testing.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Yes.

SAP Corporation has formally certified the tool: No.

X. Training

Vendor offers test tools training: Yes.

Vendor offers certification examination in test tool: Yes.

XI. Test Reporting and Review

Results logs store screen captures: Screen snapshots are automatically generated only for fatal errors. For capturing screen snapshots otherwise, RFT exposes APIs that can be used to include the snapshots in the log files.

Results logs show status for each row of data (iteration): Yes. RFT logs show every significant action performed on each iteration.

Results logs include date and time stamp: Yes.

Results logs can be saved in different formats (i.e., HTML, .doc): Yes. Logs can be stored as text files, HTML files, TPTP log files, and TestManager log files.

Creates automatic test results files (test logs): RFT provides an option to automatically generate logs. However, if a verification point has passed, the log does not show both the actual and the expected result; the Verification Point Comparator shows actual and expected values only when the VP has failed. For passed VPs, one can view only the expected result.

User-defined query and reporting or charting capability: No.

English-narrative documentation produced automatically from test processes: Yes. For high-level actions, the documentation (comments) is generated automatically.

Export to text capability for all test assets: Yes. Test assets such as the Test Project with its scripts and related assets, as well as the datapools, can be exported. However, they cannot be reused by any other testing tool.

User-extensible classes, actions, and functions: No.

User can extend interface with unlimited new attribute fields: No.

Allows many-to-one and one-to-many requirements traceability: This is possible when RFT is integrated with its test management tool (CQTM or TM).

Supports full indirection for all test processes and data file names: Yes.

Is the tool language and platform independent? No. It generates either Java or .NET scripts, which cannot be used interchangeably. However, RFT supports cross-platform capabilities: scripts generated on a Windows platform can be executed on a Linux platform and vice versa.

Reprinted with permission from IBM.


The tools were evaluated based on several factors for the following categories:

■ Transaction capture and playback
■ SAP-supported versions
■ Applications
■ Tool maintenance
■ Tool installation
■ Tool integration
■ Tool execution
■ Tool data
■ Tool security
■ Vendor support
■ Training
■ Test reporting and review

The criteria used for evaluating the vendors assist in the acquisition of SAP-specific test tools. The vendors in these exhibits provided self-evaluations of their test tools' capabilities. Exhibit 6.1 provides descriptions of the criteria factors used to evaluate the vendors. Exhibits 6.2 through 6.8 contain the actual evaluations of all vendors, listed in alphabetical order.

SOURCES OF AUTOMATION

In an SAP environment, various sources provide information for automating processes to support string, integration, regression, or performance testing. Automation within an SAP landscape includes automation of processes within R/3, SAP bolt-ons (i.e., SRM, SAP CRM Sales Internet System, Employee Self-Service [ESS], Cross Application Timesheets [CATS], Advanced Planning Optimization [APO]), and other external applications interfacing directly with R/3.

The business process master list (BPML) provides a listing of SAP transaction codes, either custom or out of the box, that are in scope for particular SAP releases. From the BPML, standalone SAP transaction codes can be identified for potential SAP automation. Automated test cases for individual SAP transaction codes can be strung together to form larger end-to-end scenarios. For instance, the BPML

may show that transactions VA01 and VL01, which are used for sales order creation and deliveries, are in scope; these transaction codes can be automated independently of each other as standalone automated test cases and then strung together to deliver a newly created sales order. Additional individual transaction codes from the BPML can be automated until the entire order fulfillment process is automated, including shipping, invoicing, credit checks, and so on.

Other sources of information that assist in the automation of test cases include business process procedures (BPPs), which contain training information for transaction codes; flow process diagrams, which can demonstrate how a process flows end to end, including multiple transaction codes; and SAP roles. Functional and technical requirements and specifications also provide further information for automating test cases containing transaction codes, reports, and workflow. Documented test cases with expected results can be drafted from the functional and technical requirements. The test team can leverage the documented test cases to construct automated test cases that include SAP verification points. Verification points are system outputs that can be inspected; for instance, when an SAP sales order is created, SAP generates a status bar message indicating that a sales order has been created, which may appear generically as "Sales Order Number XX has been generated." Other verification points include checking financial figures on reports, ensuring that workflow messages are routed to the correct user, and verifying quantities within a table.
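To make the stringing and verification-point ideas concrete, the listing below is a minimal sketch that chains two transaction steps and checks a status-bar verification point. The runTransaction helper and the message texts are illustrative assumptions, not the API of any particular test tool.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OrderToDeliveryScenario {

    // Hypothetical playback helper: executes a recorded transaction and
    // returns the status bar text SAP displays afterward. A real tool
    // would drive the SAP GUI here; this stub only simulates the output.
    static String runTransaction(String tcode, String... inputs) {
        if (tcode.equals("VA01")) return "Standard Order 11351 has been saved";
        if (tcode.equals("VL01")) return "Delivery 80001204 has been saved";
        return "";
    }

    public static void main(String[] args) {
        // Step 1: create a sales order (VA01) and verify the status bar.
        String status = runTransaction("VA01", "OR", "1000", "MAT-100");
        Matcher m = Pattern.compile("Order (\\d+) has been saved").matcher(status);
        if (!m.find()) throw new AssertionError("VA01 verification point failed: " + status);
        String orderNumber = m.group(1);   // correlate output into the next step

        // Step 2: feed the captured order number into delivery creation (VL01).
        status = runTransaction("VL01", orderNumber);
        if (!status.contains("has been saved"))
            throw new AssertionError("VL01 verification point failed: " + status);

        System.out.println("End-to-end scenario passed for order " + orderNumber);
    }
}

The correlation of the captured order number into the delivery step is exactly what turns two standalone automated test cases into an end-to-end scenario.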

TYPE OF TESTS SUITABLE AND CRITERIA FOR TEST AUTOMATION

Test tools can support playback and execution of test cases for the following testing efforts:

■ Smoke (testing of vital components for each official build)
■ Regression
■ Scenario
■ Integration
■ Performance

Test cases can be automated when the environment is stable and not subject to frequent system changes. Automation efforts on a test


environment that experiences frequent system changes causes much rework, since automated test cases would have to be changed and amended to meet a new system baseline or configuration changes. As a rule of thumb, automation of an object should not be attempted until the object has first been demonstrated to execute successfully manually. Furthermore, it is not practical to attempt to automate all test cases. Many SAP projects attempt limited test case automation for initial system releases, since the system is unstable and undergoing many changes due to identified defects, requirement changes, or requirement misinterpretations. Automation efforts are typically enhanced and expanded for production support, when system changes are introduced due to OSS (Online Service System) notes, newly requested functionality, defects, system patches, system upgrades, and so on.

For regression testing, sunny- and rainy-day scenarios are automated and played back to ensure that the introduction of new system changes does not affect previously working system functionality.

Scenario testing is the equivalent of string testing, since processes are tested within a single enterprise area (or module). Scenario testing is the precursor to integration testing and is usually the follow-on test for unit testing. Automation attempts during the scenario-testing phase may be hindered by poor resolution of defects during unit testing, since that would leave the system too unstable for scenario testing. Limited automation can be attempted toward the end of the scenario test, when the system has demonstrated that it can successfully execute processes manually.

Integration testing for an initial SAP implementation may consist of three or more iterations, whereby the system is tested during iteration one for the most important processes and during iteration two for all processes; iteration three is used to address any defects that remain outstanding from iterations one and two. Automated test cases can assist and expedite the integration-testing cycle, since many processes may have to be retested when defects are discovered and resolved.

Performance, load, volume, and stress testing in general require multiple end users to execute simultaneous keystrokes to generate system traffic and identify the application's bottlenecks, degradation points, and so on. Depending on the size and scope of the SAP implementation, a performance test may require that hundreds or thousands of end users execute processes simultaneously while system response times are manually collected. Organizing and scheduling a


performance test consisting of synchronized end users spread out in multiple locations may prove difficult, if not impossible, particularly when a performance test has to be repeated multiple times over a short time window. Test case automation is highly suggested and appropriate for a performance test, since it reduces many of the problems associated with coordinating, scheduling, and training multiple end users. Test case automation allows end users to be emulated as virtual users, which reduces the headcount needed for a manual performance test; automated tools also instantly collect results and produce graphs and charts at the conclusion of a performance test. Exhibit 6.9 provides criteria that help test engineers and test managers decide which business processes are suitable for test automation.

EXHIBIT 6.9 Criteria to Determine Whether Automation Is Necessary

Criteria pertaining to automation:

■ Test script requires the verification and validation of multiple attributes, objects, and components.
■ Test script needs to be executed with external data that resides in a database system or spreadsheet.
■ Business process to be automated has a finite number of input values and fields and was constructed with an orthogonal array to provide coverage for all its permutations.
■ Test script will be used for security testing.
■ Test script will be used for either one or all of these: stress, volume, load, soak, performance testing (add a point for each type of test that it meets).
■ Test script will be used for regression, functional testing.
■ Test script will be used to validate values, calculations, etc., displayed on a customized report or online report.
■ Test script will be repeated many times or is highly repetitive (i.e., test script will need to be executed multiple times on different software releases or builds).
■ Test script will be played back with multiple sets of data (i.e., data-driven scripts, parameterized scripts).
■ The application under test has a stable environment with repeatable conditions.
■ Test script will be used for integration testing and requires correlation across multiple business processes.


Criteria pertaining to manual testing:

■ Test script will be used to kick off or launch programs that already have batch scheduled jobs (i.e., interface programs, conversion programs).
■ Test script will be used for negative testing.
■ Test script will be used for analog testing or bitmap testing.
■ The application under test where the test script will be recorded is constantly changing.
■ Test script has test steps to display objects, figures, GUIs, etc., that the automation tool does not recognize.
■ Test script will be used for intuitive testing or lateral testing (i.e., recording of the test script is predicated on intuitive knowledge of the application or depends on error guessing).
■ Test script will be used to test how objects are displayed or captured on a screen and is not testing the application's functionality (i.e., testing to see how objects are displayed via an emulated desktop session, which varies from desktop to desktop based on screen coordinates, size of screen, pixels, etc.).
■ Test script will be used for usability testing.
■ Test script and business process require recording of business processes or test steps with more than two distinct recording tools.
■ Testing process has physical requirements such as the use of hardware equipment or mechanical devices (i.e., scanning serial numbers with a bar-coding machine).
■ Test script will be executed only once.
■ Test script will only be used on a single release or build of the application and NOT during subsequent releases or builds of the software.
■ Test script will be used to automate a business process that does not yield predictable or static results.
■ Test scripts need to be executed immediately (i.e., within 30 hours or less).
■ Test script is for a process that is highly complex on an application that has many custom controls, and once automated the test script will be extremely difficult to maintain, modify, or reuse.

In addition to the criteria provided above, it is highly beneficial to automate test cases that maximize the return on investment (ROI) for the effort spent on automating a process. The following factors must be evaluated and considered before automation of a test scenario is attempted:


■ Frequency. The number of times that a given test scenario is expected to be executed manually within a 12-month period.
■ Execution duration. Based on historical evidence (or resource expertise), how long does it take to execute the test scenario manually, including recording results for the test run?
■ Preparation. How much time does it take to plan or rehearse a test scenario that will be executed manually?
■ Stability. Based on historical data or functional specs, how many times has the underlying process been modified or reconfigured?
■ Number of assigned testers. How many individuals or resources are dedicated to manually executing a test scenario that cuts across multiple SAP modules?

With these criteria and factors, a hypothetical scenario can be created to determine objectively whether it is practical to automate a test scenario:

Hypothetical test scenario: Order-to-cash scenario

Total time to execute manually, including recording test results: 25 hours.

Frequency: Executed five times during the year to support major system releases.

Stability: Process is subject to a few minor modifications per year (two minor modifications); fairly static.

Preparation: On average, 15 hours are spent manually rehearsing the test scenario before it is fully executed.

Number of assigned testers: Three testers (having expertise in the project systems [PS], finance [FI], and sales and distribution [SD] modules).

Given these metrics, it is possible to estimate, with some margin of error, that between preparation and execution of the manual test case (including manually recording test results) approximately 200 man-hours per year are spent executing the order-to-cash scenario. This does not include the time needed to manually modify the documentation for the test case when the order-to-cash scenario is subject to configuration changes, or the time needed to coordinate the multiple resources necessary for executing the test scenario. With an automated framework in place, one can review the following statistics and metrics to determine what is needed to automate the order-to-cash scenario and whether doing so is cost effective:


Total hours needed to automate the test case (including functional support): 80 hours.

Time needed to execute the process with automated test tools (including automatic test results [logs] generated by the tools): Two hours.

Number of resources needed to execute the automated test case: One at most, since the automated test case can be scheduled to run unattended.

Preparation time needed to execute the automated test case: Five hours.

Under this hypothetical scenario, automating the order-to-cash test case from scratch and executing it is estimated to take 115 man-hours for the first year. For subsequent years it would take 35 man-hours, since the automated test case has already been constructed, whereas executing the process manually remains a fixed 200 man-hours per year, subject to the availability of the testing resources and their level of expertise. Based on these assumptions, the analysis points objectively to a case in favor of automation. With a similar analysis, projects can employ an objective approach for deciding which scenarios to automate.
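The arithmetic behind this comparison can be made explicit. The following is a minimal sketch using the figures from the hypothetical scenario above; the class and variable names are illustrative only.

public class AutomationRoi {
    public static void main(String[] args) {
        int runsPerYear = 5;

        // Manual execution: 15 hours of rehearsal plus 25 hours of
        // execution (including recording results) per run.
        int manualHoursPerYear = runsPerYear * (15 + 25);              // = 200

        // Automated execution: a one-time 80-hour build cost, then
        // 5 hours of preparation and 2 hours of execution per run.
        int buildHours = 80;
        int automatedFirstYear = buildHours + runsPerYear * (5 + 2);   // = 115
        int automatedLaterYears = runsPerYear * (5 + 2);               // = 35

        System.out.printf("Manual: %d h/yr; automated: %d h first year, %d h thereafter%n",
                manualHoursPerYear, automatedFirstYear, automatedLaterYears);
        // 200 versus 115 in year one and 35 afterward: automation pays off in
        // the first year here, and the gap widens each subsequent year.
    }
}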

AUTOMATION TRANSCENDS TESTING

Test tools are helpful for testing, but they also serve other purposes that are not related to validation of system, functional, and technical requirements. Test tools offer benefits that can expedite and facilitate SAP activities. In an SAP implementation, test tools can bring the following benefits, which help to increase the ROI:

■ SAP ad hoc data loads. Test tools can automate processes and SAP transaction codes for infrequent or one-time data loads for events such as training, environment setup, and so on. For instance, an automated test case can be created to load thousands of SAP pay scales within hours. Other examples of data loads include loading materials, vendors, and wage types.
■ Prototypes/demonstrations for end users. As part of an informal user acceptance test or client demonstration of the SAP system,


processes can be automated and played back repeatedly to show expected system functionality in demos or prototypes.
■ Verification of objects. Security settings, SAP variants, and other system settings can be inspected and verified with test tools. An example is automating a process that retrieves the data values of the variants for all Advanced Business Application Programming (ABAP) programs and extracts the information into a spreadsheet for inspection, as opposed to verifying each ABAP program variant manually, one at a time.
■ Repetitive nontesting tasks. In SAP there are many repetitive system setup and process initialization tasks that are not related to a specific testing effort. For instance, the security team has to create, update, and maintain users. These roles and tasks can be performed with automated test tools.

METHODS OF AUTOMATION

In SAP projects, the most common method employed for automation is to provide the resource knowledgeable in the test tools with documented test cases from which to create and design automated test cases. Frequently, the functional team leaders will not release resources to support automation efforts because the functional resources are time constrained. Automation of processes requires support and knowledge from different project members, since documented test cases are usually not kept up to date with the system configuration, which diminishes their value. Exhibit 6.10 offers techniques and methods for automating test cases.

EXHIBIT 6.10 Automation Approaches

Method: Videotaped business process
Description: A functional expert (usually remote) captures a business process in the form of video (with voice) and e-mails it to the automation expert.
Tools: Lotus Cam
Advantages: Solves time differences; can be played back without interacting with the SAP system.
Disadvantages: Speed during playback; no interactive forum for Q&A.

Method: Shared session
Description: A functional expert shows the automation expert, through an emulated session, how to record a business process.
Tools: WebEx, Citrix, NetMeeting, pcAnywhere
Advantages: Can be interactive with the functional expert over the phone; allows remote interaction.
Disadvantages: Connection speeds (lags); software installation; security concerns.

Method: Sitting side by side
Description: A functional expert sits next to the test tool expert and provides instructions for how to navigate a process within SAP.
Advantages: Correct SAP navigation is ensured, including workarounds; overcomes outdated artifacts.
Disadvantages: Takes the functional expert away from primary tasks; to refine the script, additional expertise may be needed.

Method: Detailed documentation
Description: The test tool expert follows detailed written documentation to record a process.
Tools: BPPs, test cases, test scripts
Advantages: Allows the test tool expert to work independently and to map failures to test scripts.
Disadvantages: Time consuming to produce detailed documentation; documentation becomes obsolete if not managed through version control.

SIGNS OF TEST AUTOMATION FAILURE

SAP projects that have purchased commercial test tools and are suffering from the conditions below will probably need to either scrap all test automation efforts, outsource the automation efforts, or dedicate more project resources (i.e., subject matter experts [SMEs], functional analysts, configuration team members, and test engineers) to the automation activities:


■ Test engineers are automating processes without any established guidelines and criteria.
■ Test managers or senior managers are unaware of or confused about what was previously automated and how it pertains to the project's future testing cycles.
■ Test engineers or experts with the automated test tools are hired or contracted but do not have access to SMEs, business analysts (BAs), or configuration team members to construct automated test cases.
■ The project has woefully inadequate documentation for test cases, BPPs, or flow process diagrams, which hinders the test engineer's ability to understand the project's business processes and business rules; thus he or she cannot develop suitable test cases or verify testable conditions.
■ Test engineers construct and design automated test cases in an environment that is subject to frequent changes, and the configuration or development changes are not clearly communicated within the project, which causes automated test cases to fail during playback and results in much rework.
■ The project does not have dedicated or expert resources for test case automation and instead uses "fillers," or individuals from other teams who have different primary job responsibilities, for test case automation.
■ The project implements an automated test case strategy only with individuals who have recently come out of a one- to four-day training class for test case automation, and the project does not have a test case automation mentor or expert for the recently trained resources.
■ Test tools have outdated versions and have not been upgraded, and there is no dedicated project resource for maintaining the test tools.
■ Project members lose faith in the test tool because it takes too much time to automate a test case when automation is initially attempted, complaining that executing a manual task over a 10-hour time window is much quicker than attempting to automate the same process over five business days. Initial automation efforts are extremely time consuming, and the ROI for automated test tools is not realized until the automated test cases are executed frequently in future testing cycles.


The preceding list represents signs that the test case automation approach is on a collision course with failure, which usually causes projects to abandon automation attempts in favor of manual testing. When these signs appear, the test manager or project manager may have to reevaluate the need for automating test cases, or consider whether a third-party provider can deliver automation much more efficiently.

TEST MANAGEMENT TOOLS

Test management tools are used for test planning, test repository management, test design, test execution, defect reporting and tracking, general reporting, and audit trails. Some test management tools can be integrated with other commercial tools from third-party companies for purposes such as version control and requirements management. Within a test management tool, a testable requirement can be linked or associated to a test case, and after the test case is executed the requirement is automatically updated with either a pass or failure status. Test management tools can also be used to store and execute both automated and manual test cases. Furthermore, test results, test logs, and defects can be stored within a test management tool with date/time stamps and audit trails. Some test management tools also offer e-mail workflow capabilities for reported and closed defects.

Arguably, the biggest advantage of test management tools is that they offer a single repository where all test artifacts can be safely and securely stored, as opposed to storing information on test cases in disparate and disconnected spreadsheets, shared drives, or e-mails. Because test management tools are a single repository, data can be collected from a single source, which increases the transparency of the testing effort and provides greater visibility into testing progress. For example, permissions and authorizations can be granted to project members to generate real-time graphs, charts, and reports to analyze how many test cases have been designed and executed, how many defects with a priority of "1" remain outstanding, or how many requirements have been covered. The ability to generate real-time testing metrics from a test management tool


allows senior management and test managers to make informed decisions in support of a go/no-go decision or an exit-criteria decision based on objective information.

Another benefit of test management tools is that they offer a history for planning and estimating the costs of future testing cycles. Many SAP projects struggle with questions such as how much of the budget should be allocated to testing activities, how many billable hours should be spent on testing, and how to create a project schedule with accurate dates and activity durations for the planning and execution of test cases. With a test management tool, data such as how long it actually took to design, plan, and execute a test case or resolve a defect can be easily extracted and used as a baseline for planning and estimating the costs and number of resources needed for a future testing cycle. This information cannot be easily extracted from disparate spreadsheets or e-mails, particularly for projects that experience employee, consultant, or subcontractor turnover after a major testing cycle is completed.
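As a rough illustration of the repository idea described above, the sketch below links a requirement to its test cases and derives the requirement's coverage status from execution results. The class and field names are assumptions for illustration, not any vendor's data model.

import java.util.ArrayList;
import java.util.List;

public class TestRepository {
    enum Status { NOT_RUN, PASS, FAIL }

    static class TestCase {
        final String id;
        Status result = Status.NOT_RUN;
        TestCase(String id) { this.id = id; }
    }

    static class Requirement {
        final String id;
        final List<TestCase> linkedTests = new ArrayList<>();
        Requirement(String id) { this.id = id; }

        // A requirement is covered only when every linked test case passed;
        // otherwise it reports the first non-passing result.
        Status status() {
            if (linkedTests.isEmpty()) return Status.NOT_RUN;
            for (TestCase tc : linkedTests) {
                if (tc.result != Status.PASS) return tc.result;
            }
            return Status.PASS;
        }
    }

    public static void main(String[] args) {
        Requirement req = new Requirement("REQ-SD-017");
        TestCase createOrder = new TestCase("TC-VA01-01");
        TestCase createDelivery = new TestCase("TC-VL01-01");
        req.linkedTests.add(createOrder);
        req.linkedTests.add(createDelivery);

        createOrder.result = Status.PASS;     // execution results flow back
        createDelivery.result = Status.FAIL;  // to the requirement automatically
        System.out.println(req.id + " status: " + req.status());  // prints FAIL
    }
}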


CHAPTER 7

Quality Assurance Standards

The concept of quality assurance (QA) in an SAP environment conjures the perception that QA is the equivalent of configuring the quality management (QM) module. While QM configuration helps end users perform tasks such as lot and source inspections, the act of configuring QM does not introduce QA to the design, configuration, or development of the SAP solution. Projects that adhere to recognized methodologies and philosophies such as Six Sigma, the Capability Maturity Model (CMM), and Institute of Electrical and Electronics Engineers (IEEE) standards are most likely to successfully enforce QA standards. The QA team can perform the following activities within an SAP implementation:

■ Ensure that documentation for deliverables such as business process procedures (BPPs), flow process diagrams, and technical and functional specifications is in accordance with documented standards.
■ Ensure that all mandatory information is documented and approvals are granted before an object is transported into the production environment.
■ Ensure that project members assigned to testing tasks and execution of test cases adhere to the procedures and methodology outlined in the test plan and test strategy (i.e., using appropriate templates and version control for objects).
■ Perform spot inspections of project deliverables and document scorecards for measuring and evaluating the compliance of project teams with documented QA standards.
■ Document lessons learned.
■ Train and mentor project members on QA standards.
■ Ensure that the project's test cases and requirements are aligned with the project's scope.


■ Enforce QA standards and audit the deliverables and work products subject to QA standards.
■ Provide quality gates to substantiate the exit criteria for a testing cycle.

The QA team can help to create, define, and enforce quality standards. Standards govern how deliverables and work products are created, peer reviewed, and accepted. QA standards are needed to design test cases, plan the testing efforts, define the testing criteria, report test results, and resolve reported defects. Depending on a project's budget, the line between the QA team and the test team blurs, and individuals assigned to the test team must also fulfill the expected role(s) of QA team members. In theory, the concept of QA relates to preventing defects, whereas testing relates to detecting defects. The assumption is hereby made that test team members fulfill both testing and QA roles.

Defined and implemented QA standards must be fit for purpose, get buy-in from project managers and/or team leaders, and be supported by the QA charter; otherwise, they risk becoming obsolete and difficult to enforce. QA representatives must document training materials and provide training for project members who are expected to adhere to QA standards.

TEST PLAN AND STRATEGY

The test plan and strategy are the documentation that addresses the details, procedures, and approach for testing; together they explain the "how" and "what" of testing. The test manager defines the test strategy in the project preparation phase and finalizes the documentation for the test strategy in the blueprint phase. The test manager should obtain buy-in from the various project stakeholders, such as the project manager, the configuration manager, and the development manager, when documenting the test strategy, as the test strategy needs to be realistic for the project given its deadlines, available resources, and budget allocation. The SAP ASAP methodology within SAP's Solution Manager platform offers an accelerator white paper for documenting the test strategy. IBM's Ascendant methodology guide for implementing SAP also provides a template for a test strategy.


The white paper test strategy from Solution Manager defines the expected roles for the various testing efforts and the recommended test system for each testing effort, and also provides definitions for each SAP testing phase from unit testing through system testing, which helps project members standardize their testing terms and nomenclature. (Note: The following website provides a glossary of testing terms: www.erp.navy.mil/util/browse.asp?glossaryletter=p.) The generic test strategies from either Solution Manager or Ascendant can be customized to meet a project's specific testing needs. For instance, a test strategy may need to be modified to include a framework for test case automation or the outsourcing of test cases for offshore execution. The test strategy can also address what documents should be retained at the end of each testing phase. Specific test strategies can be developed to provide more granularity for other testing efforts such as stress, security, and user acceptance testing. SAP's Solution Manager also provides test strategy templates for planning a stress test.

The test plan addresses specifically the "how" of testing. Elements of a test plan include:

■ Available resources (i.e., machines, test lab, equipment, test tools, etc.)
■ Test schedule (timeline of activities)
■ Test calendar (expected sequence and execution of test cases)
■ Test criteria (entrance/exit)
■ How test results will be reported
■ Criteria for automating processes (if test tools are in place)
■ Resolution of testing defects
■ Issues, risks, and assumptions for testing
■ Specific roles and responsibilities for each tester and descriptions for each testing role
■ Objectives of the test
■ Scope of the test
■ An organizational chart for individuals participating in the testing efforts
■ Test case templates
■ Test readiness review criteria
■ How the test environment will be constructed and baselined and how data will be populated in the test environment
■ How interfaced data will be verified through the legacy systems


IBM's Ascendant SAP implementation guide offers a sample test plan. Typically, test plans are applied and adhered to for integration testing consisting of multiple iterations, depending on the project scope. Both test strategies and test plans need to be signed off and approved by the project's stakeholders; it is recommended that the configuration, development, integration, and Basis team leaders and the project manager sign off on the test plan and test strategy. The test plan and strategy need to be stored in a version-controlled repository that is accessible to all testing participants. Prior to the start of the testing cycles, the test manager should present the contents of the test strategy and test plan in a kickoff presentation to the individuals participating in and responsible for conducting testing tasks. The test plan and test strategy are living documents; however, they should be amended only under a controlled process that includes approvals. The QA team helps ensure that project members adhere to the guidelines and procedures documented within the test plan and test strategy.

TEST CRITERIA

The software testing criteria identify critical conditions and measures necessary to start, exit from, or suspend testing for a designated testing effort. Testing criteria are defined and documented within the test plan and are constructed with input from several project stakeholders. Testing criteria can be customized for each project, and they should be aligned with the company's policies and goals. For instance, a company that follows Six Sigma (6σ) as part of its statistical process control may impose a requirement that at least 90 percent of all test cases be successfully executed and show a "pass" status for the first iteration of the integration test. In contrast, another company that does not have a strong culture of quality or total quality management (TQM) may be satisfied with a success rate of 70 percent for all test cases for the first iteration of the integration test. A corporation will need to establish testing criteria that are fit for purpose and suitable to the constraints, budget, and resources available to the project team implementing SAP. Formal testing efforts that are subject to signoffs, peer reviews, and audits are typically likely to include testing criteria.

The main types of testing criteria are entrance and exit criteria. According to the Certified Software Tester Common Body of Knowledge,


the following definition is provided for entrance and exit criteria:

Entrance Criteria/Exit Criteria—the criteria that must be met prior to moving to the next level of testing, or into production, and how to realistically enforce this or minimally how to reduce risk to the testing organization when external pressure (from other organizations) causes you to move to the next level without meeting exit/entrance criteria.

The following are examples of test criteria for scenario testing:

Entrance Criteria

■ All developed code must be unit tested.
■ Unit testing must be completed and signed off by the configuration team.
■ At least 85 percent of all unit test cases must have passed.
■ No defects with a priority level of 1 from unit testing remain unresolved.
■ Any outstanding defects from unit testing are documented and have workarounds.

Exit Criteria

■ All high-priority defects of level 1 or 2 must be fixed and tested.
■ At least 80 percent of all testable requirements must have been verified.
■ All requirements with a high level of importance to the business must have been shown to work successfully.
■ At least 90 percent of all test cases must have passed successfully.
■ A trend of decreasing defects on a weekly basis.
■ Outstanding defects of low or medium priority must have documented workarounds and be signed off as acceptable risks by the subject matter experts.

In addition to entrance and exit testing criteria, other criteria include:

■ Release—as part of the decision to move something into the production environment.
■ Suspension—to terminate or halt testing.
■ Success—the level of achievement or passes needed to consider a testing effort successful.
■ Pass/fail.
■ Resumption—criteria to continue testing if it has been suspended. Testing will not recommence until the software meets these criteria.
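As a minimal sketch of how exit criteria like the examples above could be checked mechanically, the following listing evaluates a pass-rate threshold, a requirements-coverage threshold, and an open-defect condition. The thresholds mirror the sample criteria; all variable names and counts are illustrative.

public class ExitCriteriaCheck {
    public static void main(String[] args) {
        int testCasesPassed = 450, testCasesTotal = 490;          // illustrative counts
        int requirementsVerified = 200, requirementsTotal = 240;
        int openPriority1or2Defects = 0;

        // Integer arithmetic avoids rounding: passed/total >= 90% etc.
        boolean passRateMet = testCasesPassed * 100 >= 90 * testCasesTotal;
        boolean coverageMet = requirementsVerified * 100 >= 80 * requirementsTotal;
        boolean noHighDefects = openPriority1or2Defects == 0;

        boolean exitCriteriaMet = passRateMet && coverageMet && noHighDefects;
        System.out.println("Exit criteria met: " + exitCriteriaMet);
    }
}

The point of expressing the criteria this way is that a go/no-go decision becomes a mechanical check against agreed thresholds rather than a subjective argument.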

TEST READINESS REVIEW

The test readiness review (TRR) is a formal review gate to verify that the system is ready for formal testing and approval. A TRR consists of a checklist whereby criteria factors are evaluated to assess how well prepared the system and the project members are to initiate a testing effort. As an analogy, a TRR is similar to the checklist that a pilot must complete before departing from the gate to ensure that certain conditions are met before the plane departs. In similar fashion, the project needs to ensure that certain conditions are met before starting a testing effort.

The contents and criteria of TRRs are evaluated and addressed in the presence of witnesses from the configuration team, test team, configuration management team, development team, Basis team, the project manager, and the project management office (PMO). A TRR is recommended for major testing efforts such as integration testing but in theory can be held prior to any testing effort (i.e., scenario testing, user acceptance testing, etc.). The criteria from a TRR should be assigned in advance to project members, who must be ready to state at the TRR meeting whether the criteria have been fulfilled or, if they have not, why this is the case and the consequences for the project schedule and the testing effort. A TRR should be held at least 72 hours prior to the start of a testing effort to allow project members to respond to or address criteria factors that have not been met.

Exhibit 7.1 shows a sample TRR to be evaluated prior to the start of an integration test. The TRR can be amended or customized to meet project-specific needs. Note that each condition within Exhibit 7.1 can have a different importance factor; depending on which condition from the TRR is not met, the project may decide that it needs to delay or postpone the start of the integration test. The TRR should be signed off by the witnesses attending the review when the conditions have been met.1

EXHIBIT 7.1 Suggested TRR Checklist

Test Readiness Review Checklist
Date:
Witnesses:
Approvals:

Each condition is marked Yes (Met), Not Met, or N/A, with comments:

1. Has the integration test plan been approved?
2. Have all testing procedures been documented, including procedures for reporting test results and resolving and closing defects?
3. Have a testing schedule and a testing calendar been approved and posted?
4. Have all testing facilities (i.e., rooms, test environment, desktops, test lab) been prepared?
5. Has all required data been created, and is it ready for integration testing? Has all common data for integration testing been defined (i.e., chart of accounts, materials, etc.)? Has all necessary test data been loaded into the test environment (i.e., data loaded from CATT scripts, client copy, interfaces, etc.)?
6. Are there any defects outstanding from a prior testing effort (i.e., scenario testing)?
7. Have all integration test cases been completed? Approved?
8. Have all necessary SAP IDs/passwords and roles been established and verified? Have testers been given access to shared drives, LANs, portals, test tools, test management repositories, etc.?
9. Is there a contact list for all testing participants? Have all testing participants been confirmed, including participants from offsite locations? Have the contact names and telephone numbers of support personnel been provided to the test team?
10. Have all interfaces with other systems (i.e., legacy systems) been identified for integration testing?
11. Have sample sets of representative data been identified to test interfaces?
12. Have all batch scheduled jobs for ABAP programs been identified?
13. Have all testable requirements been fully traced to the correct test cases?
14. Is the target test environment ready for testing? Are the hardware components ready? Have all patches and OSS notes been applied? Does the environment have connectivity to the legacy systems? Does it have the latest configuration changes?
15. Have all procedures been established for transporting objects into the test environment? Have the individuals responsible for signing off transports been identified?
16. Is there a change control board in place to evaluate proposed system changes or scope changes arising from test results or test defects?
17. Have all workflow roles/profiles been set up?
18. Has the test team held a kickoff presentation to review all testing procedures (including roles for testing participants)?
19. Have daily or weekly meetings been established to review and debrief test results?
20. Have the exit and suspension criteria been defined for integration testing?
21. Do testing issues have workarounds? If so, have the workarounds been documented?
22. Has the system been baselined?
23. Have arrangements been made with the appropriate support personnel (i.e., Basis) to provide extended support beyond normal hours of operation?
24. Have all test tools been installed?


1. The Defense Contract Management Agency (DCMA) offers a detailed TRR online at http://guidebook.dcma.mil/226/2245%20TRR.pdf for those readers interested in seeing another sample TRR.

TEST CASE TEMPLATE

Testing requires test case templates that indicate the process that needs to be executed (commonly referred to as test steps) and the expected system behavior or system response after the test steps are executed (commonly referred to as expected test results). For SAP testing, test cases should contain, at a minimum, the following information and fields:

■ SAP roles needed to execute a process (i.e., warehouse clerk).
■ Data value(s) or data variants needed to execute the test (i.e., enter data value “1000” for company code; test the process with multiple “wage types” such as straight time, holiday time, overtime, vacation time, etc.).
■ Requirement met or fulfilled from executing the test case.
■ Any preconditions needed for executing a test case (i.e., a requisition is needed before generating a purchase order).
■ Description of the process to be tested.
■ Approval fields (i.e., for signoffs).
■ Test steps to be performed.
■ Expected test results.
■ Actual test results (i.e., pass or failure).

The SAP ASAP methodology offers test case templates and instructions for creating and documenting integration test scenarios, as seen in Exhibit 7.2. The unit testing template from the SAP ASAP methodology, embedded as an accelerator, consists of the template for BPPs that can be enhanced with test conditions. The template and its fields shown in Exhibit 7.2 can be customized within a test management tool or reproduced in spreadsheets or text editors. When test cases are customized within test management tools, they offer a series of benefits that cannot be easily obtained from disconnected spreadsheets or text editors, such as automatic reporting, metrics, traceability to requirements, audit trails, and transparency into up-to-date progress on test execution. Individual test cases should be produced for each unique process that needs to be tested.

1 The Defense Contract Management Agency (DCMA) offers a detailed TRR online at http://guidebook.dcma.mil/226/2245%20TRR.pdf for those readers interested in seeing another sample TRR.

EXHIBIT 7.2 Test Case Template from SAP ASAP Methodology

Integration Test Scenario No. _____    Date: ____ / ____ / ____

Header fields: SCENARIO; DESCRIPTION; BUSINESS CASE; OWNER; STATUS; RUN NO; RUN DATE.

Setup data: DATA OBJECT; VALUE/CODE; DESCRIPTION.

Transactional steps (rows numbered 0 through 99), with one column per field: No.; BUSINESS PROCESS STEPS/BPP NUMBER; TRANS. CODE; INPUT DATA/SPECIAL INFORMATION; OUTPUT DATA/RESULT; EXPECTED RESULTS; TESTER/TEAM; OK/ERROR; COMMENTS AND NOTES.

Approval: ______________________________________    Comments:

APPLICABILITY AND LIMITATIONS OF QA STANDARDS

Implemented or proposed QA standards must align with the project’s scope, budget, and deadlines. QA standards must first be applied to critical-to-quality processes or deliverables, those that can compromise or undermine the project’s success when their quality degrades. The work and deliverables of the QA team must be governed by an approved charter, and project members must receive sufficient training to adhere to QA standards.

The QA team holds the promise of offering consistency for project deliverables and work products, but it also has limitations. QA standards can enforce a defined process and methodology and contribute to project consistency. It is possible for QA standards to ensure consistency across project deliverables while the project consistently produces the wrong deliverables. For example, a QA standard may enforce that a particular mandatory field in a template be populated, but the QA team member may not know whether the contents of the mandatory field are correct and can enforce only that the field is not left empty. QA representatives can enforce a process or methodology for producing deliverables (i.e., no field is left empty, all executed test steps must be accompanied by screenshot printouts, forms need to be filled out before an object is transported into production, etc.) but may not have sufficient subject matter expertise or knowledge to discern whether the deliverables are suitable for the project needs or whether the contents of a deliverable are fit for purpose.

In many projects the role of the QA team members is obscure and poorly understood, which causes many project members, in particular from the configuration and development teams, to question the need for QA representatives. Other project members view the audits and spot inspections by QA representatives as a nuisance, since QA members may not have sufficient knowledge of SAP processes or SAP navigation.

Under these situations, the role and even the existence of the QA team become tenuous and difficult to justify. Some projects may also implement QA standards that are difficult to enforce or standards that, if enforced, can have unintended effects. For instance, a QA standard stating that the execution of all test cases must have a success ratio of 95 percent or higher for first-time execution appears highly desirable on the surface, but it can also cause test team members to hide or “sweep under the carpet” potential defects, because testers become apprehensive about reporting defects that would compromise the 95 percent success-ratio metric.

QA managers must develop QA initiatives that support the project’s scope statement and must have the approval of a project committee. In order to roll out QA standards to project members, it is necessary to identify how training on the standards will be provided, how the standards will be enforced, how noncompliance with QA standards or other deficiencies will be reported, and how corrective actions will be applied to deliverables that do not conform to QA standards.

CHAPTER 8

Assembling the QA/Test Team

Any SAP project—whether it is a brand-new implementation, is in production support, includes an upgrade, or requires adding new modules and/or bolt-ons or extending a rollout to a new division or company site—will need testing and quality assurance (QA) resources to conduct the required testing. Some of the most often asked questions related to SAP QA and testing resources are:

■ When are they needed?
■ For how long are they needed?
■ Where will they come from (i.e., internal transfers or external hires)?
■ What will they do? What are their roles and responsibilities?
■ What skill sets do they need to have?
■ How do we measure their success (i.e., how will testers’ performance be evaluated)? What are the criteria for doing so?
■ Whose authority are they under (i.e., whom do they report to)?
■ How will they interact with the other project teams (i.e., Basis, Development, Functional, etc.)?
■ How much will they cost?

Answers to these questions are not always (if at all) readily available in any SAP implementation methodology. Nevertheless, the inability to answer them often plagues an SAP project and can lead to an unstable system in production.

Many SAP projects assemble test and QA teams from resources pulled away from their primary job responsibilities simply because those resources happen to be underutilized at the time, without much thought given to the expertise, experience, and skill required to do the job right. The misconception persists that anybody can do testing and that little special skill is required to accomplish effective SAP testing. Understanding the skills needed and identifying and screening the “right” testing and QA resources can be time consuming, yet it is usually far more expensive to introduce the “wrong” resources to the project. The test manager and project manager need to recruit testing resources based on defined roles and skill sets that are suitable for the tasks at hand (i.e., understanding SAP implementations, strategizing and planning for effective testing, and planning for different types of testing efforts with manual and automated test case execution).

The SAP project manager needs to ensure that the project teams constantly follow and adhere to QA standards that enforce the principle of preventing errors while building a product with quality. Furthermore, the project will need dedicated testing resources that enforce the principle of error detection, verifying that the system meets documented business requirements and service-level agreements (SLAs). Assembling a team that has a mixture of QA and testing skills is necessary for deploying, upgrading, implementing, and maintaining an SAP system.

QA AND TEST TEAM DIFFERENCES

Although both QA and testing contribute to system quality, the terms are distinct and their philosophies may also vary. An SAP project may need both a QA team and a testing team, or a single team that combines the skill sets of both QA and testing. QA is a role that helps define and enforce the standards that will be used to configure and develop the system. In contrast, testing is the activity that ensures that the system meets the documented system requirements through detection, which consists primarily of the execution of test cases.

Through system testing, errors are identified, retested, resolved, and closed. The decision to form both a QA team and a test team depends largely on project scope, budget, and schedule. Exhibit 8.1 provides examples of ways to differentiate the activities and roles of the QA and test teams. Projects in heavily regulated environments or those subject to audits may need both QA and test team resources. However, a project with a limited budget but with a consistent approach across all project teams for producing deliverables and work products may need only a testing team. Alternatively, a project may form a single integrated QA/test team whereby the team manager can implement QA standards and also manage the execution of test cases.

EXHIBIT 8.1 Sample Activities for a Quality Assurance and Test Team

QA Activities:
• Create standards for drafting test cases, requirements, developing diagrams, collecting information from workshops, naming conventions, etc.
• Enforce standards
• Develop functional test plan and test strategy
• Develop test case template
• Participate in requirements-gathering workshops
• Inspect technical and functional specifications
• Inspect process flow diagrams
• Inspect requirements
• Raise risks
• Participate in Change Control Board (CCB)
• Approve transports
• Develop forms for peer reviews
• Create exit/entrance testing criteria
• Develop defect resolution procedures
• Develop unit, scenario, and integration-testing procedures

Test Activities:
• Develop automated test cases
• Execute test cases (manual/automated)
• Support/maintain test tools
• Customize test tools
• Support/maintain test lab
• Inspect requirements
• Create/maintain testing schedule
• Create requirements traceability matrix (RTM)
• Develop testing schedule
• Conduct system integration testing
• Conduct security testing
• Conduct performance testing
• Report defects
• Review documented test cases
• Help identify test data
• Participate in Change Control Board (CCB)
• Generate test reports, charts, graphs
• Produce testing metrics
• Develop performance test plan
• Document lessons learned
• Hold testing kickoff meetings

In an agile or low-budget SAP development environment it is perfectly fine, and even recommended, to treat the QA and testing responsibilities interchangeably, for example, by making the testing team also responsible for QA activities such as verifying adherence to defined standards and the other activities defined in Exhibit 8.1. Appendix A provides a list of activities for QA and test team members based on the SAP ASAP implementation methodology.

WHEN SHOULD QA AND TEST TEAM MEMBERS BE BROUGHT ONTO THE PROJECT?

SAP testing and QA resources are needed as soon as the project preparation phase commences for an initial SAP implementation. For instance, based on SAP’s own ASAP methodology built within the Solution Manager platform, the following activities are identified for the project preparation phase:

■ Determine quality standards
■ Define testing strategies

For initial SAP implementations, the ASAP methodology indicates that a test strategy needs to be written and documented well in advance of holding workshops to identify business processes and business requirements. An experienced QA manager can document an SAP test strategy that meets the project’s objectives and testing expectations. During later phases of the SAP project, such as blueprint and realization, the need arises to bring in more QA and testing resources to complete activities such as inspecting requirements, procuring and installing automated test tools, developing standards for creating business process flow diagrams and functional and technical specifications, creating automated test cases, and creating a requirements traceability matrix (RTM).

For other projects that already have a system in production, bringing in QA and testing resources, if they do not exist already, may be long overdue. SAP production support teams have to constantly apply SAP system changes due to releases of patches, OSS notes, and enhancements from the vendor, all of which require testing. Furthermore, the production support team needs to ensure that consistent methods are practiced for applying, tracking, controlling, and verifying system changes.

The project manager may form an integrated QA/testing team to verify system transports, automate and maintain test cases, execute test cases, and conduct regression and performance testing. Without an integrated QA/test team, the project may be exposed to risks in the production environment, since the functional and development teams may not have the methodology, bandwidth, expertise, or time to conduct regression testing.

Other SAP projects that merit the immediate formation of either separate or integrated QA and test teams are projects undergoing upgrades, adding new modules, or absorbing a new division or corporate unit. These project teams often need to conduct gap analysis to determine new system functionality; draft new requirements or modify existing requirements for security, performance, and system functionality; identify new system touch points; or include logic that takes into account business rules and policies. The QA and test teams can review and inspect new requirements when new modules or bolt-ons are added to the existing implementation, conduct system testing, and ensure that the configuration and development teams observe documented standards.

SKILL SETS FOR QA AND TEST TEAM

The role of the SAP tester is to test and verify that the system meets system requirements from all aspects of security, performance, functionality, reporting, workflow, and programming. SAP testers need proficiency in various areas, including test planning, requirements evaluation, industry regulations, automated test tools, test management tools, test execution, test reporting, and functional navigation within SAP R/3 and SAP bolt-ons. Placing nonskilled testers or fillers in the test team can produce unreliable test results. Testers who do not understand the concept of module and data integration within an SAP system need to be trained and educated on those concepts. SAP testers need to have, at a minimum, the following background:

■ Formal testing skills and methodologies.
■ Knowledge of SAP processes (modules and bolt-ons).
■ Expertise in navigating SAP transactions.
■ Knowledge of ABAP programming.
■ Ability to interpret and understand test cases.
■ Knowledge of the various types of SAP data.
■ Knowledge of industry-specific SAP solutions.
■ Expertise with automated test tools.
■ Ability to write Structured Query Language (SQL) statements.
■ Programming skills, such as Visual Basic.
■ Ability to inspect requirements for completeness, quality, and testability, and to construct an RTM (a small sketch of an RTM follows this list).
■ Subject matter expertise.
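Because several of these skills converge in the requirements traceability matrix, a minimal sketch of constructing one follows. The requirement and test case identifiers are invented for illustration; a real project would draw them from its requirements management and test management tools.

    # Minimal RTM: map each requirement to the test cases that cover it and
    # flag any requirement with no coverage. Identifiers are invented.
    requirements = ["REQ-001", "REQ-002", "REQ-003"]
    case_coverage = {
        "TC-PO-01": ["REQ-001"],
        "TC-PO-02": ["REQ-001", "REQ-003"],
    }

    rtm = {req: [] for req in requirements}
    for test_case, covered in case_coverage.items():
        for req in covered:
            rtm.setdefault(req, []).append(test_case)

    for req, cases in rtm.items():
        print(f"{req}: {', '.join(cases) if cases else 'NOT COVERED'}")
    # REQ-002 prints as NOT COVERED, signaling a traceability gap.

Even this trivial report answers the question auditors most often ask: which testable requirements have no test case tracing to them.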

Specific testing roles and skill sets for various testing positions are outlined in Exhibit 8.2.

EXHIBIT 8.2 Test Program Roles

Test Manager

Responsibilities:
• Liaison for interdepartmental interactions; representative of the testing team
• Recruiting, staff supervision, and staff mentoring and training
• Test program budgeting and scheduling, i.e., test-staffing needs and effort estimations
• Customer support interaction and customer interaction, as applicable
• Test planning, including development of testing goals and strategy
• Vendor interaction
• Defines technologies to help improve testing efforts, i.e., test-tool selection and introduction, in-house developed testing tools, etc.
• Cohesive integration of test and development activities
• Acquisition of hardware and software for test environment
• Test environment and test product configuration management
• Test-process definition, training, and continual improvement
• Use of metrics to support continual test-process improvement
• Test-program oversight and progress tracking
• Coordinating pre- and post-test meetings

Skills:
• Understands company’s interviewing methodology, i.e., has taken the “hiring certificate” (if none exists, insist that a repeatable interviewing process is created)
• Understands all HR policies related to management
• Understands SAP testing process or methodology
• Understands test-program concerns including test strategies, test environment and data management, trouble reporting and resolution, and test design and development
• Understands manual testing techniques and automated testing best practices
• Understands application business area and application requirements
• Understands different types of technologies available to increase testing efficiency
• Skilled at developing test goals, objectives, and strategy
• Familiar with different test tools, defect-tracking tools, and other test-support COTS tools that can enhance testing efforts (Ghost, VMWare, etc.) and their use
• Good at all planning aspects, including people, facilities, and schedule

Test Lead

Responsibilities:
• Technical leadership for the test program, including test approach
• Support for customer interface, recruiting, test-tool introduction, test planning, staff supervision, and cost and progress status reporting
• Verifying the quality of the requirements, including testability; requirement definition; test design; test-script and test-data development; test automation; test-environment configuration; test-script configuration management; and test execution
• Interaction with test-tool vendor to identify best ways to leverage the test tool on the project
• Staying current on the latest test approaches and tools, and transferring this knowledge to the test team
• Conducting test-design and test-procedure walk-throughs and inspections
• Implementing test-process improvements resulting from lessons learned and benefits surveys
• Test traceability matrix (tracing the test procedures to the test requirements)
• Test-process implementation
• Ensuring that test-product documentation is complete

Skills:
• Understands application business area and application requirements
• Familiar with test-program concerns including test-data management, trouble reporting and resolution, test design, and test development
• Expertise in a variety of technical skills, including programming languages, database technologies, and computer operating systems
• Familiar with different test tools, defect-tracking tools, and other COTS tools supporting the testing life cycle, and their use

Usability Test Engineer

Responsibilities:
• Designs and develops usability testing scenarios
• Administers usability testing process
• Defines criteria for performing usability testing, analyzes results of testing sessions, presents results to development team
• Develops test-product documentation
• Defines usability requirements, and interacts with customer to refine them
• Participates in test-procedure walk-throughs

Skills:
• Proficient in designing test suites
• Understands usability issues
• Skilled in test facilitation
• Excellent interpersonal skills
• Proficient in GUI design standards

Manual Test Engineer

Responsibilities:
• Designs and develops test procedures and cases, with associated test data
• Manually executes the test procedures
• Attends test-procedure walk-throughs
• Conducts tests and prepares reports on test progress and regression
• Follows test standards

Skills:
• Good understanding of SAP modules and design
• Proficient in software testing
• Proficient in designing test suites
• Proficient in the business area of the application under test
• Proficient in testing techniques
• Understands various testing phases
• Proficient in GUI design standards

Automated Test Engineer (Automator/Developer)

Responsibilities:
• Designs and develops test procedures and cases based upon requirements
• Designs, develops, and executes reusable and maintainable automated scripts
• Uses capture/playback tools for GUI automation and/or develops test harnesses using a development or scripting language
• Follows test-design standards
• Conducts/attends test-procedure walk-throughs
• Executes tests and prepares reports on test progress and regression
• Attends test-tool user groups and related activities to remain abreast of test-tool capabilities

Skills:
• Good understanding of SAP modules under test
• Proficient in software testing
• Proficient in designing test suites
• Proficient in working with test tools
• Programming skills
• Proficient in GUI design standards

Network Test Engineer

Responsibilities:
• Performs network, database, and middleware testing
• Researches network, database, and middleware performance monitoring tools
• Develops load and stress test designs, cases, and procedures
• Supports walk-throughs or inspections of load and stress test procedures
• Implements performance monitoring tools on an ongoing basis
• Conducts load and stress test procedures

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills, including programming languages, database technologies, and computer operating systems
• Product evaluation and integration skills
• Familiarity with network sniffers and available tools for load and stress testing

Test Environment Specialist

Responsibilities:
• Responsible for installing the test tool and establishing the test-tool environment
• Responsible for creating and controlling the test environment by using environment setup scripts
• Creates and maintains test database (adds/restores/deletes, etc.)
• Maintains requirements hierarchy within test-tool environment

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills, including programming and scripting languages, database technologies, and computer operating systems
• Test-tool and database experience
• Product evaluation and integration skills

Security Test Engineer

Responsibilities:
• Responsible for security testing of the application
• Responsible for supporting the secure software development life cycle

Skills:
• Understands security testing techniques
• Background in security
• Security test tool experience
• Understands security modeling techniques

Build Support, Test Library and Configuration Specialist

Responsibilities:
• Test-script change management
• Test-script version control
• Maintaining test-script reuse library
• Creating the various test builds, in some cases
• Verifying that development build and version control standards are being met

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills, including programming languages, database technologies, and computer operating systems
• Configuration-management tool expertise
• Test-tool experience

For projects that have significant investments in requirements management tools, automated test tools, version control software, defect tracking, and test management tools, testers who have demonstrated experience and/or vendor certification with these tools are recommended as members of the test team. Ideal test team members are individuals who understand the SAP “to-be” system, the company’s policies and business rules, integration points, and legacy systems; can document and execute test cases; and have proficiency with automated test tools. SAP testers document, design, and peer review test cases with expected results and execute the test cases either manually or with automated test tools. Finding test team members who have both functional SAP experience and technical experience with automated test tools may turn out to be a challenge. It is recommended that the test team consist of a mixture of individuals with functional knowledge of SAP and with knowledge of automated test tools in order to develop and maintain automated test cases.

QA team members need to successfully implement, communicate, and enforce standards and document the test plans. QA team members do not necessarily need SAP functional knowledge or knowledge of automated test tools, but they do need to provide practices for building a system with quality and preventing defects. QA members need to define standards for conducting workshops, gathering requirements, inspecting requirements, documenting functional and technical specifications, and creating business process procedures (BPPs) and flow process diagrams. The QA team members must ensure that the project members from the configuration, test, and development teams understand and can follow the QA standards to avoid producing inconsistent work products.

The QA team also plays a vital role in documenting the project’s lessons learned after each testing phase and in monitoring the entrance and exit criteria for each testing effort.

DISTRIBUTION OF SKILLS ACROSS THE TEST TEAM

An effective testing team consists of team members with a mixture of expertise, such as subject matter, technology, and testing techniques, plus a mixture of experience levels that includes junior, mid-level, and expert testers. Subject matter experts (SMEs) who understand the details of the application’s functionality play an important role in the testing team. The following list describes these concepts in more detail.

■ Subject matter expertise. A technical tester might think it is feasible to learn the subject matter in depth, but this is usually not the case when the domain is complex. Some problem domains, such as tax law, labor contracts, military procurement, and regulatory compliance with the Food and Drug Administration (FDA) and the Federal Energy Regulatory Commission (FERC), may take years to fully understand. It could be argued that detailed and specific requirements should include all the complexities possible, so that the developer can properly design the system and the tester can properly plan the testing. Realistically, however, budget and time constraints often lead to requirements that are insufficiently detailed, leaving the content open to interpretation. Even detailed requirements often contain internal inconsistencies that must be identified and resolved. For these reasons, each SME must work closely with the developer and other SMEs on the program (e.g., tax-law experts, financial experts, human resources law experts) to parse out the intricacies of the requirements. Where there are two SMEs, they must be in agreement; if two SMEs cannot agree, a third SME’s input is required. A testing SME will put the final stamp of approval on the implementation after appropriate testing.

■ Technical expertise. While it is true that a thorough grasp of the applications to be tested is a valuable and desirable trait for a tester, the tester’s effectiveness will be diminished without some

level of understanding of software (including test) engineering. The most effective SME testers are those who are also interested and experienced in the technology—that is, those who have taken one or more programming courses or have some related technical experience. Subject matter knowledge must be complemented with technical knowledge, including an understanding of the science of software testing. Technical testers, however, require a deeper knowledge of the technical platforms and architectural makeup of a system in order to test successfully. An SAP installation consists of the core R/3 system, SAP bolt-ons, and the systems interfacing with SAP. A technical tester should know how to write automated scripts with automated test tools, know how to write a test harness, and understand such technical issues as compatibility, performance, and installation, in order to be best prepared to test for compliance. While it is beneficial for SMEs to possess some of this knowledge, it is acceptable, of course, for them to possess a lower level of technical expertise than do technical testers.

■ Experience level. A testing team is rarely made up exclusively of expert testers with years of expertise—nor would that necessarily be desirable. As with all efforts, there is room for apprentices who can be trained and mentored by more senior personnel. To identify potential areas for training and advancement, the test manager must review the difference between the skill requirements and an individual’s actual skills. A junior tester could be tasked with testing the lower-risk functionality or cosmetic features such as the graphical user interface (GUI) controls (if this area is considered low risk). If a junior tester is tasked with testing higher-risk functionality, the junior tester should be paired with a more senior tester who can serve as a mentor.

Although technical and subject matter testers contribute to the testing effort in different ways, collaboration between the two types of testers should be encouraged. Just as it would take a technical tester a long time to get up to speed on all the details of the subject matter, it would take a domain or subject matter expert a long time to become proficient with technical issues such as test case automation. Cross-training should be provided to make the technical tester acquainted with the subject matter and the SME familiar with the technical issues.

NUMBER OF RESOURCES

The number of test and QA resources needed for a project depends largely on the expected number of modules, business processes, interfaces, security roles, conversions, reports, and enhancements included in the release, based on the project scope, and on the expertise of the resources. Frequently, SAP projects struggle to identify the expected number of test cases, since no formal approach exists for documenting and managing requirements, which makes it difficult to anticipate the necessary level of effort for testing. Testing activities include test planning, test design for both manual and automated test cases, test scheduling, test execution, test reporting, and retesting as needed.

A given SAP implementation may have as many as 1,000 in-scope out-of-the-box and custom SAP transactions, in addition to multiple security roles and Advanced Business Application Programming (ABAP) programs, for a system release. An SAP upgrade may include changes to the system functionality, GUI, and database, and the inclusion of SAP hot packs, which require thousands of testing man-hours. Furthermore, projects that implement SAP processes based on end-to-end processes such as order-to-cash and build-to-order may need to test the processes and their corresponding variations, which can create hundreds of test cases.

Manual testing is time consuming, prone to error, difficult to coordinate, and resource intensive. The project team may need to combine manual testing efforts with automated testing efforts to create a test team capable of providing coverage for all test cases and subsequent retesting when defects are reported. As a result, it is recommended that projects pair up at least one testing resource per module or end-to-end process, depending on the structure of the functional SAP teams. The testing team also will need at least one resource to support testing of SAP bolt-ons (e.g., APO, SRM) and one to two resources for development (ABAP) objects. Additional testing resources will be needed to maintain and support test tools and to perform load/stress testing. The QA team will need at least two to three resources to set up and enforce standards.

The following estimates are provided for test planning, test design, test execution, and test reporting. When the number of test cases is identified, these guidelines may help to estimate the expected number of man-hours needed to complete a testing phase; a worked example follows the estimates below.

Unit
■ Two to three hours to design each unit test case (also include negative testing conditions for error handling and exception messages).
■ Two to three hours to execute each test case, which includes at least two manual runs and recording of test results.

String (Scenario)
■ One to two days to design each string test case (combines multiple SAP transaction codes; includes SAP roles).
■ Four to six hours to execute each test case, which includes at least two manual runs and recording of test results.

Integration Testing (Scenario)
■ Two to three days to design each integration test case (combines multiple SAP transaction codes with data from external systems; includes SAP roles, preconditions, and postconditions).
■ Four to six hours to execute each test case manually, which includes at least two manual runs and recording of test results.
■ Six to eight days to automate an integrated test case for each end-to-end process (assuming the system is stable).
■ Up to 30 minutes to execute an automated test case, assuming multiple sets of data.

Performance Testing (Scenario)
■ Four to six days to automate an integrated test case (assuming the system is stable).
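As a worked example, the following sketch applies the midpoints of the ranges above to a hypothetical test inventory. The case counts are invented; a real project should substitute its own counts and locally calibrated rates.

    # Back-of-the-envelope effort estimate using midpoints of the ranges
    # above; an 8-hour day is assumed and the case counts are invented.
    DAY = 8
    rates = {                       # (design_hours, execution_hours) per case
        "unit":        (2.5,       2.5),
        "string":      (1.5 * DAY, 5.0),
        "integration": (2.5 * DAY, 5.0),
    }
    counts = {"unit": 120, "string": 30, "integration": 20}

    total = 0.0
    for phase, n in counts.items():
        design, execute = rates[phase]
        hours = n * (design + execute)
        total += hours
        print(f"{phase:>11}: {n:>3} cases -> {hours:6.0f} man-hours")
    print(f"{'total':>11}: {total:6.0f} man-hours (~{total / DAY:.0f} man-days)")

For this invented inventory the total comes to roughly 1,610 man-hours, or about 200 man-days, before any retesting of reported defects, which illustrates why staffing decisions should not wait until test execution begins.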

TEAM COMPOSITION

The QA team can consist of two to three permanent members. It is primarily concerned with defining, communicating, and enforcing

standards for preventing system errors and documenting test plans. The number of dedicated QA resources is fairly static throughout the life cycle of an SAP project. A typical SAP QA team can consist of the following permanent members:

■ One QA manager, who is primarily responsible for drafting and documenting test plans, documenting risks, and setting and defining standards.
■ One to two QA members responsible for enforcing testing standards and generating reports and metrics to support company audits.

Unlike the QA team, the test team may grow exponentially in size, depending on the testing effort at hand. For instance, during the project preparation phase, the organization may not have any formal testing roles or any test team members. However, during the integration testing effort, the test team may expand from its permanent members to include members from across the entire project structure. Members of other groups or teams may become “temporary” testers who are “borrowed” by the test team during string, integration, and stress testing and who perform testing activities such as documenting test cases, sequencing test cases, executing test cases, and resolving defects. Some “borrowed” testing members from other groups may not support testing activities directly but may perform dependent activities, such as loading data, establishing the test environment, and creating user test roles, that are critical to the testing efforts.

At some point during the life cycle of the SAP implementation, upgrade, or production support, the permanent test team members will be augmented with “borrowed” test team members from other groups. The permanent test team consists of the test manager and the individuals who support and maintain the test tools, develop automated test cases, are part of an outsourcing agreement, support the test lab, and execute test cases. The permanent test team members are experienced in testing methodologies, automated test tools, and linking requirements to test cases; understand SAP functionality and processes; and can execute documented test cases independently. A heuristic for determining the necessary number of permanent test team members is one resource per identified end-to-end process (i.e., one test team member to provide coverage for the order-to-close team). A typical SAP test team can consist of the following permanent and dedicated testing members:

■ One test team manager.
■ One or two test team members for testing development objects that include reports, interfaces, conversions, enhancements, workflow, and forms.
■ One test team member per bolt-on (i.e., APO, CRM, BW, etc.).
■ One test team member for each end-to-end process (i.e., purchase-to-pay, order-to-cash, hire-to-retire, etc.). Typical responsibilities include test tool functional automation, test case execution, and reporting of defects.
■ Two test team members for stress/performance/load testing. These members create and maintain automated test cases for performance testing, help the project stay compliant with established service-level agreements (SLAs), and ensure that the project’s response times remain optimal. Performance testing is an ongoing activity for an SAP implementation and should be performed when system changes are implemented (including OSS notes, system patches, enhancements, and hot packs); when new divisions are added; when the GUI is upgraded or reconfigured; when modules or bolt-ons are added; when new interfaces and conversion programs are added; and so on.
■ One or two test team members for support, maintenance, and customization of test tools and the test lab.
■ One leader to manage all automation efforts or serve as a liaison for outsourced test automation efforts.

The assumption is made that the test team members described above are at the experienced level and are not entry-level staff. The number of test team members may also be increased through the use of outside contractors, particularly for SAP projects that will automate test cases. Contractors who specialize in automated test tools may perform functions for which no in-house expertise exists on the project’s test team. The following situations are examples in which outside contractors may augment the existing testing resources:

■ Stress testing and performance tuning for the SAP CRM Sales Internet application prior to a go-live.
■ Building a library of automated test cases for a new system module.
■ Training for a new automated test tool.
■ Mentoring of testers recently trained on test tools.
■ Execution of test cases due to an increase in project scope.

In addition to the permanent testing members, a test team can include borrowed resources from other teams. The activities for the borrowed testing resources include identification of test data, providing SAP navigational support, refinement of documented test cases, execution of manual test cases, coordination of ABAP objects with legacy system owners, identification of SAP roles, and resolution of reported testing defects. Test team members can come from other groups, including the change management team, SMEs, the configuration team, legacy owners, the development team, end users, the Basis team, and the outsourcing partner.

For initial testing efforts such as unit testing, the test team members may consist primarily of borrowed testers from the configuration, development, and security teams who plan, design, and execute test cases, while the permanent testing resources provide maintenance support for the underlying test tools used for unit testing. When other testing efforts such as string (scenario), integration, performance, and user acceptance testing are initiated, the test team consists of both permanent and borrowed testing resources. For instance, the test team can increase in size by as much as 50 percent with borrowed resources between unit testing and user acceptance testing (UAT) for the following reasons:

■ Resources from the change management team would be needed to select end-user participants for the UAT.
■ Members from the configuration and development teams would need to assist the UAT participants and resolve their defects.
■ Security team members would need to identify the testing roles for the UAT participants.
■ QA team members would have to enforce standards.
■ The Basis team would have to set up an SAP test client for UAT testing.
■ The dedicated test team members would have to maintain test tools and coordinate defect resolution.
■ The UAT participants, who are primarily system end users, would execute the UAT test cases.

Exhibit 8.3 shows multiple types of testing efforts along with the project teams supporting the testing effort. This exhibit reveals that the integration-testing phase requires assistance from more project teams than any other testing effort.

EXHIBIT 8.3 Teams Involved with Testing Activities Depending on the Type of Test Effort

Unit: Development, Configuration, QA, Security, Test
String: Development, Configuration, Change management, QA, Security, Test
Integration: Development, Configuration, Change management, QA, Security, Test, End users, SMEs, Basis, Data, Legacy system owners
Regression: Development, Configuration, Change management, QA, Security, Production, Test, Integration
UAT: Development, Configuration, Change management, QA, Security, Test, End users, SMEs
Performance: Development, Configuration, Test, Basis, Data, Network, Database

EVALUATING TESTERS1

1 Adapted from Elfriede Dustin, Effective Software Testing, Reading, MA: Addison-Wesley, 2002.

Maintaining an effective test program requires that the implementation of its functions, such as the test strategy, the test environment, and the test team makeup, be continuously evaluated and improved as needed. Test managers are responsible for ensuring that the testing program is being implemented as planned and that specific tasks are being executed as expected. To accomplish this, the implementation of the test program has to be tracked, monitored, and evaluated, so it can be modified as needed.

At the core of the test program execution are the test engineers. The ability of a tester to properly design, document, and execute effective tests, accurately interpret the results, document any defects, and track them to closure is critical to the effectiveness of the testing effort. The test manager could plan the perfect testing process and implement the best strategy, but if the testing team members do not effectively execute the testing process (such as participating effectively in requirements inspections or design walkthroughs) or omit strategic testing tasks as assigned (such as executing a specific test procedure), important defects could be discovered too late in the life cycle, resulting in increased costs. At worst, defects will be completely overlooked and make their way into production software.

A tester’s effectiveness can also make a big difference in the interrelationships with other groups. A tester who always finds bogus errors or reports “user” errors, meaning the application works as

expected but the tester misunderstood the requirement, or, worse, a tester who often overlooks critical defects, will lose credibility with other team members and groups and can tarnish the reputation of an entire test program.

Evaluating a tester’s effectiveness is a difficult and often subjective task. Besides the typical evaluations of any employee’s attendance, attentiveness, attitude, and motivation, there are specific, testing-related measures against which a tester can be evaluated. The evaluation process starts with the recruitment effort: hire a tester with the skills required for the roles and responsibilities assigned to the position. Based on the skills you require for a specific hire, you can base your expectations and task assignments on those skills. All testers need to be detail oriented and possess analytical skills, independent of whether they are technical experts, subject matter experts, security testers, or usability testers. Once you have hired the right tester for the job, you have a good basis for evaluation. Where a testing team is “inherited,” this type of evaluation criteria cannot be applied up front. In this case, it is necessary to come up to speed on the various testers’ backgrounds so that the team can be tasked and evaluated based on their experience, expertise, and background, and reassigned to other roles as needed.

A test engineer’s performance cannot be evaluated unless there are roles and responsibilities, tasks, schedules, and specific standards to follow. First and foremost, the test manager must make sure to state clearly what is expected of the test engineer and by when. Expectations, or the “what” and “by when,” need to be clearly communicated, and tasks must be meticulously outlined and documented. The following list describes a typical set of expectations that must be communicated to testers.

■ Standards and procedures. The test engineer needs to be aware of which standards and procedures need to be followed, and processes must be communicated.
■ Schedules. Testers must be aware of the test schedule, which should indicate when test plans, test designs, test procedures, scripts, and other testing products must be delivered. In addition, the delivery schedule of software components to testing should be known by all testers.

■ Set goals and assign tasks. Document and communicate the tasks and schedule deadlines specific to each tester. Both the test manager and the test engineer have to agree on the assigned tasks.
■ Budgets. In the case of a tester evaluating a testing tool or other form of purchased technology, the available budget must be known, so that testers can work within that range and avoid wasting time evaluating products that are too expensive.

Expectations and assignments differ depending on the task at hand and the skill set of the tester; for example, different types of tests, test approaches, techniques, and outcomes are expected. Once the expectations are set, the test manager can start comparing the output of the test team against the preset goals, tasks, and schedules, measuring effectiveness and implementation. The following is a list of items to consider when evaluating testers’ effectiveness.

■ Subject matter expert versus technical expert. The expertise expected from a subject matter expert relates to the domain of the application, while a technical tester will be concerned with the technical issues of the application. In the case of a technical tester functioning as an automator, automated test procedures should be evaluated based on defined standards that must be followed by the test engineers. For example, did the engineer create maintainable, modular, reusable automated scripts, or do the scripts have to be modified with each new system build? In an automation effort, did the tester follow best practices? For example, did the test engineer make sure that the test database was baselined and could be restored so that the automated scripts can be rerun? If the tester is developing custom test scripts or a test harness, the tester will be evaluated on the same criteria as a developer, namely readability and reliability of the code. A tester who specializes in the use of automated tools, yet does not understand the intricacies of the application’s functionality and underlying concepts, will usually be ineffective: automated scripts generated based only on high-level knowledge of the application will often find less important defects. It is important that the automator understand the application’s functionality in order to be an effective member of the testing team. Another area of evaluation is technical ability and adaptability: is the test engineer capable of picking up new tools and becoming familiar with their capabilities? Testers should be trained on tool capabilities if they are not aware of all of them.

■ Experienced versus novice tester. The skill level of the tester also needs to be taken into account. For example, novice testers may overlook some errors or not realize they are defects; therefore, it is important to assign lower-risk testing areas to the novice tester. Experienced testers may ignore some classes of defects based on past experience (“the product has always done that”) or the presence of workarounds. Whether or not this is appropriate, testers will assimilate this knowledge into their testing and not report defects that seem unimportant to them but that end users might find unacceptable.

■ Type of tests, functional versus nonfunctional (such as performance and security). Evaluate a tester’s understanding of the various testing techniques available and knowledge of which technique is the most effective for the task at hand. If the tester does not understand the various techniques and applies a technique ineffectively, test designs, test cases, and test procedures will be adversely affected. The evaluation of functional tests can additionally be based on a review of the test procedures. Typically, testers are assigned to write test procedures for a specific area of functionality based on assigned requirements. As part of an effective test process, test procedure walkthroughs and inspections should be conducted that include the requirements, testing, and development teams. During the walkthrough, verify that all teams agree on the behavior of the application.

Consider the following during an evaluation of functional test procedures:

■ The completeness of the mapping of the test procedure steps to the requirements steps. Is the traceability complete?

■ The correctness of the test input, steps, and output (expected result).
■ Are major testing steps omitted in the functional flow of the test procedure?
■ Has an analytical thought process been applied to come up with effective test scenarios?
■ Have the test procedure creation standards been followed?
■ How many revisions, owing to misunderstanding or miscommunication, are required before the test procedures are considered effective and complete?
■ Have effective testing techniques been applied to derive the appropriate set of test cases?

During a test procedure walkthrough, in addition to verifying the mapping to requirements, also evaluate the “depth,” or effectiveness, of the test procedure. This is somewhat related to the depth of the requirement steps. What does the test procedure test? Does it test the functionality at a high level, or does it really dig deep down into the underlying functionality of the application? For example, a functional requirement might state, “The system should allow for adding records of type A.” A high-level test procedure will test that the record can be added through the GUI. A more effective test procedure will have additional steps in place that test the areas of the application that are affected when this record is added. Those additional steps could include a SQL statement that verifies that the record appears correctly in the database tables, plus additional steps verifying the record type; a sketch of such a step follows below. If test procedures are at a very high level, evaluate first whether the requirements are at the appropriate level and pertinent details are not missing. However, if the detail is in the requirement but is missing from the test procedure, the test engineer might need additional coaching on how to write effective test procedures. Different criteria are required for evaluating the effectiveness of the different types of test procedures; nonfunctional tests, for example, have to be designed and documented in a different manner than functional test procedures.
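To make the notion of a “deep” test step concrete, the sketch below adds a database check behind a GUI-level “record added” verification. The table and column names are invented, and Python’s built-in sqlite3 module stands in for whatever database client the project actually uses.

    import sqlite3

    # Stand-in database; a real test would connect to the system under test,
    # and the table and column names here are invented for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (record_id TEXT, record_type TEXT)")
    conn.execute("INSERT INTO records VALUES ('R-100', 'A')")  # the GUI add

    # Shallow check: the GUI reported success.
    # Deep check: the row really landed in the table with the expected type.
    row = conn.execute(
        "SELECT record_type FROM records WHERE record_id = ?", ("R-100",)
    ).fetchone()

    assert row is not None, "record was not persisted"
    assert row[0] == "A", f"wrong record type: {row[0]}"
    print("deep verification passed")

The shallow procedure stops at the GUI message; the deep procedure fails loudly if the record never reached the database or arrived with the wrong type, which is exactly the class of defect a high-level script would miss.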

■ Testing phase (i.e., alpha test, beta test, system test, acceptance test, etc.). Different tasks are expected from the tester depending on the testing phase, and they have to be considered.

During system testing, the tester will be responsible for all testing tasks, including the development and execution of test procedures and tracking defects to closure. During alpha testing, by contrast, a tester might simply be tasked with recreating and documenting defects reported by the “alpha testing” team, which is usually a company’s independent testing team (independent verification and validation [IV&V] team). During beta testing, the tester might be tasked with documenting beta test procedures for the beta testers to execute, in addition to recreating and documenting defects found by the beta testers (customers are often recruited to become beta testers).

■ The phase of the development life cycle. As mentioned throughout this book, testers should be involved from the beginning of the life cycle. For example, during the requirements phase, the tester can be evaluated based on defect prevention efforts, such as discovery of testability issues or pointing out requirement inconsistencies. A tester’s evaluation can be subjective, and many variables have to be considered, without jumping to the first seemingly obvious conclusion. For example, when evaluating the test engineer’s testing of requirements during the requirements phase, it is also important to evaluate the requirements themselves. If the requirements are poorly written, even an average tester can find many defects; however, if the requirements are well laid out and above average, only a really good tester can find the most involved defects.

■ Follows instructions and attention to detail. It is also important to consider how well a test engineer follows instructions and pays attention to detail. Unreliability is a bad trait in a test engineer, and follow-through has to be monitored. If test procedures need to be updated and executed to ensure a quality product, the test manager must be confident that the test engineers will carry out this task. If tests have to be automated, the test manager should be confident that progress is being made. Weekly (or, in the final stages of a testing phase, daily) status meetings where engineers report on their progress are useful for tracking and measuring progress.

■ Types of defects, defect ratio, and defect documentation. The type of defects found by the engineer must also be considered during

08_4782

2/5/07

212

11:11 AM

Page 212

TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE

the evaluation. However, there are many caveats to consider when using this metric to evaluate a tester's effectiveness. Keep in mind the skill level of the tester, the type of tests being performed, and the testing phase being conducted, and consider such factors as the complexity and maturity of the application under test. The number of defects found depends not only on the skill of the tester but also on the skill of the developers who wrote, debugged, and unit tested the code, and on the walkthrough/inspection teams who reviewed the requirements, design, and code, ideally removing most of the defects before formal testing. Additionally, evaluate in this context whether the test engineer finds defects that are complex and domain related or only cosmetic ones. Cosmetic defects, such as missing window text or misplaced controls, are relatively easy to detect; they become high priority during usability testing. More complicated problems, relating to data or to cause-and-effect relationships among elements in the application, are more difficult to detect, require a better understanding of the application, and become high priority during functional testing. On the other hand, cosmetic defect fixes, being the most visible, make the customers happy.

What about the tester who is responsible for testing a specific area where most defects are discovered in production? First, the test manager will need to evaluate the area for which the tester is responsible. Is it the most complex, error-prone area? If it is a very complex area and the product was released in a hurry, the escaped defect may be more understandable. Further evaluate what types of defects were discovered in production. Could those defects have been discovered if a basic test procedure, part of the existing test procedure suite, had been executed, and was there plenty of time to execute it? If so, this would be a major oversight on the part of the tester responsible for this area. However, before passing judgment, there are some additional points to consider:

● Was the test procedure supposed to be executed manually? Is the manual tester tired of executing the same test procedures over and over again, and did he feel, the fiftieth time around, that it was safe to skip them because that part of the application has


always worked? In this case, automated regression tests should be strongly considered.
● Was the software delivered under time pressure, preventing a full test cycle, yet the deadline couldn't be moved? Releases shouldn't go out without having met the release criteria.
● Was this test automated? Are details missing in the automated script, so that it missed testing that one step? In this case, the automated scripts have to be reevaluated.
● Is it a defect that was discovered using some combination of functional steps that is rarely executed? This type of defect is more understandable.

Additionally, it may be important to go back and look at the test goals, the risks of the project, and the assumptions made when the test effort started. If it was decided not to conduct a specific type of test because of time constraints or low risk, then the tester should not be held responsible afterward for not finding defects that test could have uncovered. The fallout is a cost of the assumed risk, and the risk was assumed at the beginning of the project with full knowledge of the possibility of problems.

Effectiveness can also be evaluated by examining how a defect is documented. Is there enough detail in the documented defect for a developer to be able to recreate the problem? Are developers consistently having a difficult time recreating one specific tester's defects? Make sure there are standards in place that document exactly what information is required in a documented defect, and that the defect tracking life cycle is well communicated, understood, and followed by the testers.
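As a minimal sketch of such a documentation standard, a team might enforce a required-fields check before a defect can be submitted; the field list below is an assumption for illustration, not a prescribed standard:

    REQUIRED_DEFECT_FIELDS = [
        "summary", "steps_to_reproduce", "expected_result",
        "actual_result", "build_number", "environment", "severity",
    ]

    def validate_defect(defect: dict) -> list:
        """Return the missing or empty fields; an empty list means the
        defect meets the documentation standard."""
        return [f for f in REQUIRED_DEFECT_FIELDS if not defect.get(f)]

A check like this, wired into the defect tracking tool, catches incomplete reports before they ever reach a developer.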


The outcome of the evaluation could point to various issues. If a test procedure lacks detail (something that should be discovered during the test procedure inspection phase), it could be that the tester didn't quite understand the requirement, among many other plausible causes. Determine the cause of each issue and try to come up with a solution; each issue needs to be evaluated as it arises, before a judgment call regarding the tester's capability is made. After careful evaluation of the entire situation, and after additional coaching has been provided, it will be possible to get an idea of how detail-oriented, analytical, or effective the tester is. Where it is determined that the tester lacks attention to detail or analytical skills, or that there are communication issues, the tester's performance might have to be specifically monitored and reviewed, with additional instruction, training, and improvement. A tester's effectiveness must be evaluated on an ongoing basis, not only to determine training needs but, most importantly, to ensure the success of the testing program.

TEST ENGINEER SELF-EVALUATION

Test engineers themselves should assume some responsibility for evaluating their own effectiveness. The following list can be used as a starting point for test engineer self-evaluation:

■ Consider the types of defects you are discovering. Are they important defects, or are they mostly cosmetic, low-priority defects? For example, if you consistently uncover only low-priority defects during a functional testing phase, such as hot keys not working or typos, reassess the effectiveness of your test procedures. (Note: keep in mind that during usability testing, for example, the priority of these types of defects changes.)
● Are your test procedures detailed enough, covering the depth and the combinations and variations of data and functional paths necessary to catch the higher-priority defects? Do your tests include invalid data as well as valid data?
● Did you receive and incorporate test procedure feedback from the requirements, testing, and development staff? If not, ask for test procedure reviews, inspections, and walkthroughs involving those teams.
● Are you aware of the testing techniques available, such as boundary-value testing, equivalence partitioning, and orthogonal arrays, so that you can derive the most effective test procedures?
● Do you understand the intricacies of the application's functionality and domain well? If not, ask for an overview or


additional training session. If you're a technical tester, ask for help from an SME.
■ Are you discovering the major defects too late in the testing life cycle? If you are consistently discovering major defects very late in the life cycle, consider the following:
● Does your initial testing focus on the low-priority requirements? Make sure to focus your initial testing on the high-priority, highest-risk requirements.
● Does your initial testing focus on regression testing of existing functionality that was working previously and hardly ever broke in the past? Make sure to focus your initial testing on code changes, defect fixes, and new functionality, and turn to regression testing later. Ideally, the regression testing efforts are automated, so you can focus your testing efforts on the newer areas.
■ Do any areas you are testing exhibit suspiciously low defect counts? If so, these areas should be reevaluated:
● Determine whether the test coverage is robust enough.
● Determine whether the types of tests you are executing are the most effective. Are important steps missing?
● Analyze the complexity of this area of the application under test; evaluate whether the functionality simply has low complexity.
● Consider whether the functionality was implemented by your most senior developer(s) and was unit and integration tested so well that no major defects remain to be found.
■ Consider the defect work flow:







● Are you documenting defects in a timely manner? That is, as soon as a defect is discovered and confirmed to really be a defect, it should be documented; don't hold off.
● Make sure to document defects following the defect documentation standards. If there aren't any documented standards, request them from your manager (i.e., a list of all the information that needs to be in a defect for the developer to be able to reproduce it).
● If a new build is received, focus your initial testing on the defects in "retest" status. It is important that fixed defects are retested as


soon as possible, so the developers know whether their defects are really fixed.
● Continuously check your submitted defects for comments from development, in case developers require additional information or additional testing steps. Be eager to track defects to closure.
■ Examine the comments added to your defects to determine how developers or other testers receive them. If many defects are marked "works as expected" or "cannot be reproduced," this may actually be the case; however, it could also signal a few other things:
● Your understanding of the application is inadequate. In this case, more training is required; request help from domain experts (SMEs).
● The requirements may be ambiguous and need to be corrected (this, however, is often discovered during the requirements and/or test procedure walkthroughs and inspections).
● Your defect documentation skills are not as effective as they could be. This can lead to a misunderstanding of the defect, and the developer will need additional steps to reproduce it.
● The developer is working from a mistaken interpretation of the requirement.
● The developer might lack the patience to follow the detailed, documented defect steps needed to reproduce the defect.
■ Monitor whether defects are discovered in your test area after the application has gone to production, and evaluate each one. Why was it missed?
● Did you not execute a specific test procedure that contained the steps that would have caught this defect? If so, why was it not executed? Are your regression tests automated?
● Did no test procedure exist that would have caught this defect? Why was it not developed? Was this area considered low risk? Evaluate your test procedure creation strategy, and add the missing test procedure to your regression test suite.
● Evaluate how you could create more effective test procedures; examine the test design, test strategy, and test technique, and discuss them with your peers or manager.
● Was there not enough time available to execute an existing test procedure? Let management know before the application goes


live or is shipped, not after the fact. This sort of situation should also be discussed in a posttest/preinstallation meeting and be documented in the test report.
■ Do other testers, in the course of their work, discover defects that were your responsibility? Evaluate the reasons and make adjustments accordingly.

There are many more questions a tester can ask him- or herself related to testing effectiveness, depending on the testing phase and the testing task at hand, the type of expertise (technical versus domain), and the experience level. An automator might want to make sure that he or she is familiar with the automation standards and best automation practices. A performance tester might request additional training in the performance-testing tool used. Self-assessment of a tester's capabilities, followed by the associated improvement steps, is an important part of an effective testing program.


CHAPTER 9

Establishing and Maintaining Testing Resources

SAP testing requires multiple resources for planning and executing test cases, reporting test results, and resolving defects. Establishing, identifying, and scheduling all the necessary resources to complete the SAP testing effort depends on the size of the organization, the scope of the test, the type of test, and the regulations that govern how the organization performs its business processes. Resources for an SAP test include:

■ Individuals to develop the test scenarios and resolve defects.
■ Individuals to develop automated test cases.
■ File cabinets or scanners for storing test results with handwritten signatures in regulated environments.
■ Establishment of the SAP instance needed to execute the test cases.
■ Test tools to develop, store, and execute automated test cases.
■ Software tools for managing requirements, configuration management, creating business process procedures (BPPs), developing flow process diagrams, and tracking SAP transports.
■ Machines where test cases will be executed.
■ A room (test lab) where the test machines for executing the test cases are housed.

Identified resources may come from different parts of an organization. For instance, for a global SAP implementation rollout, the individuals acting as super-users and executing test cases for the user acceptance test (UAT) may come from an international company plant, whereas for string testing the individuals needed to execute the test cases may originate from the configuration and test teams. Once


resources are identified for a test effort, they will need to be scheduled and confirmed prior to the test to avoid unexpected testing delays or conflicts with other teams performing tests on non-SAP applications. Furthermore, resources such as hardware components and software packages will require dedicated maintenance and potential upgrades to avoid becoming obsolete or losing their usefulness.

TEST LAB

A test lab is a physical facility dedicated to testing that serves as a war room and allows the SAP test team and other project teams to congregate for executing test cases and resolving defects. A test lab has the necessary machines (i.e., workstations such as desktops and/or laptops) and software installations to conduct an SAP test. The machines in a test lab often have more disk space, more memory, and a higher-grade processor than the test team members' individual machines, which makes it possible to install multiple software packages, including automated test tools, different operating systems, and drivers. Furthermore, machines dedicated to a test lab can be reimaged and formatted for different testing efforts without causing downtime to any individual's personal machine. One example of a test lab's use is a stress test in which emulated end users performing SAP processes in large quantities are deployed onto multiple machines that need large storage and memory; with a test lab, trial runs for a stress test can be simulated across multiple machines without affecting individual personal machines. Another suitable example is an SAP UAT, with end users from remote company locations congregating to execute test cases prior to the system go-live date. A test lab allows the UAT end users to execute test cases on the test lab machines without having to request the services of the help desk team to reimage or rebuild individual end users' machines. Without a test lab, UAT participants would have to meet in a makeshift facility lacking the proper hardware and software components to execute UAT test cases. A test lab will also contain other hardware that facilitates the testing effort, including a dedicated local area network (LAN), printers, fax machines, and a phone line. The test team is appointed the


responsibility of maintaining the test lab and all hardware and software resources contained within it. The test team schedules the test lab for various testing efforts, conducts training for the test team in the lab, and holds demonstrations there.

SOFTWARE RESOURCES

Third-party software tools, SAP's internal solutions, and in-house applications facilitate and expedite the various testing phases of an SAP implementation in the areas of test design, test execution, test reporting, and training. These software tools are resources that support and assist the testing effort. The following tools are frequently used at SAP installations to enhance the capabilities of the test team:

■ Test tool software. Test tools can be used for test management and for test case automation. Test management tools allow planning and creation of manual and automated test cases, test case execution, defect tracking, and change impact analysis for SAP transports. Test tools are used for developing automated test cases for both functional and capacity testing. Test tools can be purchased from third-party vendors or obtained from SAP in the form of its native test scripting tool, the Extended Computer Aided Test Tool (eCATT). Some vendors offer solutions that integrate with eCATT to enhance and extend its capabilities.

■ BPP authoring tools. BPPs are the cornerstone of SAP training at the SAP transactional level, but they can be enhanced and augmented to include test conditions that consequently lead to the creation of unit test cases. BPPs can be created manually in a text editor such as Microsoft Word or with third-party tools.

■ Requirement management tools. As the name implies, requirements management tools assist in the gathering and maintenance of requirements. An example of a functionality requirement would be "the system will allow creation of fixed-cost contracts"; a performance requirement would be "the system will have response times that do not exceed three seconds per screen when the maximum expected number of users are logged on to the application through a browser and the corporate


LAN." A requirement may undergo multiple changes that require management throughout the life cycle of an SAP implementation or upgrade. Requirements management tools help you manage all changes to a requirement, produce audit trails, provide a single, secured repository for managing the requirement, and provide coverage for a requirement through a requirements traceability matrix (RTM); a minimal example of such a traceability check is sketched at the end of this list. Requirements management tools can be created in-house with a database such as Microsoft Access or purchased from third-party vendors.

■ Monitoring tools. Capacity tests such as load, volume, and stress tests require monitoring of multiple application components. SAP R/3 has the internal monitoring tool CCMS, but monitoring other components, such as the network, database, and servers, requires monitoring tools from third-party vendors.

■ Transport management tools. Transporting SAP objects must occur in the right sequence, and when transports are moved into a live production environment they require signatures and approvals from multiple stakeholders. In-house forms, manuals, e-mails, spreadsheets, and templates can be created to manage the transport of objects into different SAP instances, or commercial solutions can be acquired from third-party vendors for managing the transport of all objects after testing activities have been successfully completed and all associated documentation for a transport has been finished.

■ Flow process diagramming tools. Requirements, functional scenarios, and business processes are frequently diagrammed. Diagramming techniques vary in formality from Unified Modeling Language (UML) diagrams to flow process diagrams. Many projects diagram their processes with Visio, but diagrams that include integration points and have dependencies on other processes require other types of diagramming tools. All diagrams should be accompanied by narratives that provide, at a minimum: (1) a description of the diagram, (2) the underlying assumptions made in creating the diagram, (3) the expected roles of the end users for the diagrammed process, (4) the dependencies of the diagram, and (5) the frequency with which the diagrammed process is executed.

■ Version control tools. Version management consists of locking down objects, checking objects in and out, promoting objects from one environment to the next, maintaining multiple


versions of the same objects with audit trails, and allowing access to objects based on defined security roles. Version control must be performed on all testing deliverables, such as test cases, test execution logs, automated test cases, and manual test cases. Version control tools are available from third-party vendors, or version control functionality can be included within the project's Internet portal (if one is in place).

■ Data loading tools. A dedicated test environment will be needed to execute scenarios and test cases. Loading and refreshing the data to construct the test environment manually is time consuming and tedious. SAP offers the Test Data Migration Server software to help construct the test environment and migrate all necessary data from other environments.
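As referenced in the requirement management bullet above, the core of an RTM coverage check is small. This sketch assumes requirement identifiers and test case mappings have been exported from whatever tools the project uses; the IDs are invented for illustration:

    # Assumed exports from the requirements and test management tools.
    requirements = {"REQ-001", "REQ-002", "REQ-003"}
    test_case_coverage = {
        "TC-01": {"REQ-001"},
        "TC-02": {"REQ-001", "REQ-003"},
    }

    covered = set().union(*test_case_coverage.values())
    uncovered = sorted(requirements - covered)
    print("Requirements without a test case:", uncovered)  # ['REQ-002']

Commercial requirements management tools perform this check automatically; the sketch only shows what "coverage through an RTM" means mechanically.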

Software tools that facilitate testing will need upgrades, maintenance, and customization during their lifetime. Many of these tools, whether created in-house or purchased from a software vendor, are not part of the standard company software image and thus receive little or no support from the local help desk team. The project teams that procured the software tools will need to assign administrators to install the tools, apply patches, back up the files created with the tools, customize the tools, and provide in-house training. The tool administrators evaluate and assess the impact of any change to a tool. For example, for a commercial solution, the vendor may provide patches to resolve a tool's defect or enhance the tool's capability, and the administrator will need to assess how a vendor patch will affect the tool's customizations. The administrator may also have to update the training materials for any changes that the patch causes to the tool's customization, fields, screen layouts, or functionality. In-house tools do not receive vendor patches and need to be maintained internally. They require thorough design documentation in the event of turnover among the individuals who created and designed them. In-house tools typically do not come with vendor-delivered training materials, vendor online or phone support, maintenance agreements, or context-sensitive online help, which increases the risk of tool failure or of complaints from the tool's end users. However, the benefit of an in-house-created tool is that it can


meet a project's specific needs that no commercially available tool can meet without extreme modifications to the source code, and it is not subject to maintenance license fees. The project will need to weigh the risks and rewards of developing an in-house tool against procuring a commercially available solution. Depending on the project's budget and deadlines, a needed tool can be supplanted with a mixture of manual processes, e-mails, and spreadsheets; these are highly inefficient in the long run but solve a short-term need for the project. Either way, the procurement or creation of a tool underscores the need to support and maintain it.

HARDWARE RESOURCES

Hardware resources consist of the machines, such as workstations, printers, and phones, needed for the test lab, and a data center to conduct functional, technical, and system testing. Most projects building a test lab will either "borrow" hardware resources from other departments and place them within the test lab or purchase the hardware from various manufacturers. In order to install test tools, the workstations will need to meet or exceed the memory, operating system, and processor requirements specified by the test tool vendors. Depending on its scope, a project may allocate one or two workstations per functional team, or a fixed number of workstations for the entire project based on the total number of present and future requirements. Projects emulating hundreds or thousands of end users for a stress test will need to ensure that the existing workstations either have sufficient memory or that more workstations can be borrowed from other departments or the data center. Emulated end users for a stress test can consume as much as five megabytes of random access memory (RAM) per emulated end user, in addition to the memory requirements of the stress-testing tool itself (a rough sizing sketch follows at the end of this section). Once the workstations are identified, the project will need to assign a team to maintain them. The workstations will either have the company's standard image, supported by the local help desk and including periodic software updates, or an ad hoc image that is nonstandard and thus receives no support from the local help desk. Typically, the local help desk does not support problems, issues, bugs, or environmental conflicts with automated test tools on


workstations assigned for testing; these then fall under the responsibility of the assigned test tool administrator. In addition to workstations, the test team or project manager will need to ensure that testers are equipped with printers to capture screenshot printouts for system defects and to print test cases, test logs, and test reports.
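The five-megabytes-per-user figure above supports a rough sizing calculation; the tool-overhead and usable-RAM figures below are assumptions for illustration, not vendor specifications:

    MB_PER_VIRTUAL_USER = 5                   # estimate cited above
    TOOL_OVERHEAD_MB = 512                    # assumed stress-tool footprint
    USABLE_RAM_PER_WORKSTATION_MB = 2048      # assumed usable memory

    def workstations_needed(virtual_users: int) -> int:
        users_per_machine = (
            USABLE_RAM_PER_WORKSTATION_MB - TOOL_OVERHEAD_MB
        ) // MB_PER_VIRTUAL_USER
        return -(-virtual_users // users_per_machine)  # ceiling division

    print(workstations_needed(1000))  # 1,000 emulated users -> 4 machines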

ENVIRONMENT RESOURCES

Environment resources consist of the SAP client landscape where the solution is designed, tested, and subsequently transported into production. The dedicated test team owns and controls the test environment, usually designated with the letters "TST," in the SAP client landscape. The test team has the ability to change fiscal years and posting periods, refresh the system, and make client copies as needed to simulate test conditions within the test environment. The test environment is a critical resource for the test team to design, debug, and play back automated test cases. The test team must not allow configuration and development changes to occur directly in the TST environment. All development, security, and configuration changes must first occur in the development environment, usually designated with the letters "DEV," and then be transported into TST under a controlled process and with the approval of the test team.
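The DEV-to-TST rule lends itself to a simple automated guard. The environment names below follow the designations above, while the function and approval flag are illustrative assumptions, not any SAP-delivered mechanism:

    def release_transport(transport_id: str, source: str, target: str,
                          approved_by_test_team: bool) -> None:
        """Illustrative policy check: enforce the DEV -> TST path and
        test team approval before a transport is released."""
        if target == "TST" and source != "DEV":
            raise ValueError(
                f"{transport_id}: TST accepts transports only from DEV")
        if target == "TST" and not approved_by_test_team:
            raise ValueError(
                f"{transport_id}: test team approval is required")
        print(f"Releasing {transport_id}: {source} -> {target}")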

INDIVIDUAL RESOURCES

The number of resources needed to support an SAP testing effort is largely a function of the type of test being performed. Integration testing for initial SAP implementations and regression testing for major system upgrades are likely to consume the largest number of resources. Testing resources can come from the following sources: the configuration team, end users, the test team, subject matter experts (SMEs), the production team, and an outsourcing organization. The identification of resources for various SAP tests is highlighted in Exhibit 9.1. In this exhibit, individuals with the role of activity owner (AO) make up the bulk of the resources for a testing effort.

EXHIBIT 9.1 Roles of Individual Resources to Support Various SAP Testing Efforts

[Matrix of tasks and roles. Rows: the project teams (Configuration, Development, Test, Basis, End Users, SMEs, Production). Columns: the test efforts (Unit, Development, Scenario, Integration, Regression, Performance, Technical, UAT). Each cell records a team's level of participation in that test effort, consistent with the descriptions in the sections that follow.]

Levels of participation: AO—Activity Owner, R—Reviews, S—Signs off and approves, A—Active (i.e., creates test script), F—Facilitates, O—Observes


Unit Testing

Unit testing is the first type of testing that occurs, either for initial SAP implementations at the SAP transaction level or when system changes are introduced for the first time into the production environment. Members of the SAP configuration team plan, design, and execute unit test cases. Alternatively, members of the production support team plan and conduct unit testing for system changes that the production team initiated and implemented in production as a result of a help desk ticket or the application of OSS (Online Service System) notes. For SAP transactions that require testing with multiple sets of data (e.g., a test for multiple wage types), members of the test team can execute the testing with automated test cases after the configuration and production team members have successfully validated the unit test case conditions and no defects remain outstanding.
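The multiple-data-set case mentioned above is a natural fit for data-driven testing. The sketch below uses pytest-style parameterization; the wage types, pay rule, and calculate_gross_pay function are invented stand-ins for whatever the project's unit under test actually is:

    import pytest

    def calculate_gross_pay(wage_type, hours, rate):
        # Stand-in for the unit under test; real logic would live in
        # the configured payroll rules, not in the test module.
        overtime = max(hours - 40, 0)
        return (hours - overtime) * rate + overtime * rate * 1.5

    # Hypothetical wage types with expected results.
    WAGE_TYPE_CASES = [
        ("1001", 40, 25.00, 1000.00),   # regular hourly
        ("1002", 45, 25.00, 1187.50),   # overtime at 1.5x over 40 hours
    ]

    @pytest.mark.parametrize("wage_type, hours, rate, expected",
                             WAGE_TYPE_CASES)
    def test_gross_pay(wage_type, hours, rate, expected):
        assert calculate_gross_pay(wage_type, hours, rate) == expected

Within an SAP landscape, this one-test-logic-per-row-of-data pattern is roughly what eCATT's test data containers provide.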

Scenario Testing

Scenario testing builds on the unit-testing effort and includes testing of two or more SAP transactions comprising a process within a single module. Members of the configuration, production, and test teams can plan, design, and execute the scenario tests to verify the system functionality and security roles for the scenario. SMEs can approve and sign off on test results. Members of the development team can test interfaces constructed with Batch Data Communication (BDC) sessions.

Development Testing

Development testing covers the following development objects: reports, interfaces, conversions, enhancements (user exits), forms, work flow, and batch scheduled jobs. Development team members' background is typically in Advanced Business Application Programming (ABAP, SAP's native and proprietary programming language). Members of the development team plan and execute development tests, whereas members of the configuration team can approve the results of a development test. Test team members


can audit test results. Legacy system owners also assist the development team in identifying data sets for interfaces and conversions for the development test.

Integration Testing

Integration testing builds on the development and scenario test cases to form end-to-end scenarios. This level of testing requires involvement and support from most of the project members. Test team members can execute integrated test cases either manually or with automated test tools. Configuration and development team members resolve defects and plan, design, and execute the test cases. SMEs can sign off on and certify test results. Legacy system owners also assist the development team in identifying data sets for interfaces and conversions for the integration test. End users can observe the execution of integration test cases.

Performance Testing

Performance testing ensures that the system has optimal response times and does not experience degradation points or bottlenecks. The test team is responsible for planning, designing, and executing the automated test cases for a performance test, as well as for interpreting and reporting the results. The SAP Basis and other technical teams are responsible for monitoring the system during a performance test and resolving performance-based problems. The development and configuration team members play supporting roles in a performance test, launching manual jobs, identifying test data, and documenting test cases.

User Acceptance Testing

UAT allows end users to test the application from the point of view of how they will interact with it in the production environment. UAT can consist of reexecuting previously executed integration test cases with end users, demonstrations of the application,


or the creation of new test cases. End users plan, design, and execute UAT test cases. Configuration and development team members resolve defects that arise from UAT.

Technical Tests

Members of the archiving, database, and Basis teams perform technical tests such as backup and recovery, connectivity tests, and reliability tests.

Regression Testing

Regression testing is conducted to ensure that previously working system functionality is not adversely affected by the introduction of new system changes. Production support teams can plan, design, and execute regression test cases. Test team members, either internal to the project or from an outsourced agreement, can augment regression testing with the automation of test cases. End users can approve and sign off on the results of a regression test.


CHAPTER 10

Planning and Construction of Test Cases

The construction and execution of test cases verify and validate the captured in-scope requirements or requests for new system changes. Test cases contain test conditions, test data, and expected results to verify the design, functionality, and development of SAP Advanced Business Application Programming (ABAP) objects, security settings, segregation of duties, work flow, data archiving, business warehouse (BW) reports, and service-level agreements (SLAs) for system performance. A combination of test cases forms a test scenario, whereas a test script is the sequence of actions that executes a test case. Test cases can be designed in spreadsheets, text editors, or within test management tools. Project deliverables such as business process procedures (BPPs), flow process diagrams, and functional and technical specifications are excellent sources of information for designing test cases. The most effective test cases are those that can be executed independently by individuals who are not functional SAP experts, without the assistance of business analysts or SAP consultants. Templates and guidelines for building robust test cases are provided in this chapter. Well-written test cases exhibit the following characteristics and attributes:

■ Can be reused for user acceptance testing (UAT) with minimal or no modifications.
■ Trace back to testable requirements or other existing documentation (BPPs, flow process diagrams, technical or functional specifications).
■ Include preconditions.
■ Have been peer reviewed and signed off.
■ Include the SAP role(s) to be used for verifying the test conditions.


■ Include a narrative or description of the test conditions to be verified.
■ Contain valid test data (i.e., master data).
■ Have been rehearsed prior to approval.

Test cases that meet the aforementioned characteristics fulfill two major expectations in an SAP implementation: (1) reuse, or repeatability, for future testing cycles, and (2) comprehensive detail for test case automation with automated test tools. Test cases to be reused in a future testing cycle will need evaluation and maintenance for possible modification. Projects that have version control and test management tools can maintain audit trails and history logs for the actions and modifications performed on test cases. A test case written for a single SAP transaction code can be combined with test cases for other transaction codes to form test cases for end-to-end test scenarios. Test scenarios vary in complexity, since they may involve the staging and preparation of data, user exits, and execution through legacy systems.

BUILDING A TEST CASE

In order to build a test case it is necessary to know what requirement or test conditions are fulfilled by the execution of the test case. The execution of a test case needs to trace to at least one requirement; otherwise, the validity or merits of the test case are called into question. In turn, a requirement for which a test case cannot be constructed calls into question the validity of the requirement. After the underlying requirement(s) that trace to the execution of the test case are identified, it is necessary to construct and design the test case template. A test case template can be used for different testing efforts, such as functional, development, performance, security, and regression testing. SAP implementation methodologies such as IBM's Ascendant or SAP's ASAP Roadmap offer test case templates that can be recycled "as is" or modified for testing SAP. In the absence of an SAP methodology that offers a test case template, the test team will need to create test case templates in-house. At a minimum, the construction of a test case should include a data dictionary that clearly defines each field to be populated, the team or individual


responsible for populating the fields on the test case template, and the attributes of the fields on the template. Exhibit 10.1 shows a customized test case template derived from the Ascendant and ASAP Roadmap methodologies. Exhibit 10.2 shows a partial data dictionary and instructions for populating the test case shown in Exhibit 10.1. A test case template can be constructed in spreadsheets, text editors, or directly within a test management tool. Depending on the test management tool being used, it is possible to transfer the contents of a test case designed in a spreadsheet or text editor into the test management tool with macros or other utilities. For initial SAP implementations, the configuration and functional teams, business analysts, and subject matter experts play a role in documenting and peer reviewing the fields on the test case templates. For existing SAP implementations, the test team, assigned testers, and business analysts perform the activities of updating and modifying the contents of the test case templates. Documented test cases can be approved and signed off prior to test execution. Testing cycles can include test readiness review (TRR) criteria to ensure that all test cases have been peer reviewed and signed off prior to test execution. The quality assurance (QA) team (if one is present) can also play a role in ensuring that the documented process and methodology for designing and completing a test case is enforced and adhered to. The authors of test cases must assume that individuals other than themselves will execute the test cases to support future tests, such as user acceptance testing (UAT) or regression testing. Hence, it is important that all test cases have expected results, preconditions, valid test data, and conditions to be verified during testing. Test cases must be written with sufficient detail to allow non-SAP-knowledgeable individuals to execute them.
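One way to make the mandatory/optional split of the data dictionary concrete is a typed structure. This sketch mirrors a subset of the fields from Exhibits 10.1 and 10.2; the field selection and defaults are assumptions for illustration:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestCaseHeader:
        # Mandatory per the data dictionary (Exhibit 10.2).
        title: str
        description_of_test_conditions: str
        # Optional fields default to empty.
        test_method: Optional[str] = None        # "manual" or "automated"
        reviewed_by: Optional[str] = None
        test_data_preparation: Optional[str] = None
        inter_test_case_dependencies: List[str] = field(default_factory=list)
        reference_documents: List[str] = field(default_factory=list)

A structure like this makes the mandatory fields impossible to omit at construction time, a guarantee a paper template cannot give.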

LEVERAGING INFORMATION

SAP testers can leverage the following sources of information or work products for constructing test cases and populating the fields shown in Exhibits 10.1 and 10.2:

■ Business process procedures (BPPs)
■ Flow process diagrams
■ Technical and functional specifications
■ Process design documents
■ Requirements
■ Customer input (CI) templates

EXHIBIT 10.1 Hybrid Test Case Created from the ASAP Roadmap and SAP Ascendant Methodologies

[Template layout. Section 1: Header Data, with fields for Test Script Title; Test Script Method; Test Script Design Status; Test Script Execution Status; Description of Test Conditions; Testing Level/Phase; Reference Documents (i.e., BPPs, specs, etc.); Priority; Reviewed By; Test Case Author(s); Tester(s) Executing; Date Reviewed; Approver(s) for Test Results; Execution Date; Test/Data Preparation; Inter–Test Case Dependencies; Pre-Condition(s); Post-Condition(s). Section 2: Scenario Description for Test Cases, with columns for Run #, Number, Detailed Scenario Description, and Mapped Requirements (CRs). Section 3: Test Script Documentation (Transactional Steps, Case 1), with columns for No., SAP Role, Business Process Step (with Data), T-Code, Output Data/Expected Result, Actual Result, Pass/Fail, and Defect #.]


EXHIBIT 10.2 Partial Data Dictionary and Instructions for Populating a Test Case Template

Field Name: Description of test conditions
Characteristic: Mandatory
Content: A high-level explanation or summary of the process(es)/scenario(s)/business case(s) to be tested and what prompted the testing of this process (i.e., process impacted by change request, verifies a change request, etc.).

Field Name: Test method
Characteristic: Optional
Content: Manual or automated.

Field Name: Reviewed by
Characteristic: Optional
Content: The name(s) of the stakeholders reviewing and approving the test case design.

Field Name: Test data preparation
Characteristic: Optional
Content: Any data that requires staging or preparation in order to execute the entire test script, which can consist of multiple test cases.

Field Name: Inter–test case dependencies
Characteristic: Optional
Content: Any test cases that must be executed or completed successfully prior to the execution of the test script.

Field Name: Reference documents
Characteristic: Optional
Content: Any documentation, such as technical or functional specifications, BPPs, flow process diagrams, and process design documents, that maps to the test case (scenario/process) being tested.


The primary purpose of a BPP is training; however, the SAP ASAP Roadmap methodology offers an accelerator for the BPP template that can be enhanced to include test conditions, which makes it suitable for the execution of unit test cases. BPPs with test conditions used for unit testing can be strung together to create larger test cases


for scenario and end-to-end testing. Larger SAP scenarios can include test conditions for SAP work flow, migrated data from legacy systems, user exits, interfaces, converted data, and security roles.

Flow process diagrams based on Unified Modeling Language (UML) notation, such as use cases, are effective techniques for constructing test cases, since they reveal the expected interaction and tasks of end users with the system. Diagrams representing user tasks or larger end-to-end processes must include notations that describe the assumptions, constraints, high-level description, user roles, preconditions, postconditions, and business priority for the described process. UML use-case diagrams are suitable for providing such notation. Furthermore, it is necessary to maintain all relationships and diagrams, since a diagram for a single process may form part of a larger end-to-end scenario. Flow process diagrams provide a high-level description of the process to be tested, which can help the resource assigned to creating test cases document the test case template.

Functional and technical specifications provide the logic for configuring the system and for creating report, interface, conversion, enhancement, work flow, and form (RICEWF) objects. Furthermore, SAP configuration settings and changes made to the production system, such as new business logic, validation rules, and radio buttons, also need to be documented when maintaining the SAP production system, in order to allow proper testing coverage and documentation of test conditions for future testing cycles. As an example of the value of technical specifications, assigned testing resources can leverage them to create test conditions and document test cases for validating interfaces against the following parameters: business rules for converting all data and field mappings, the number of records to be extracted, how the interfaces are batch scheduled, and data reconciliation between source and target systems.

Requirements are critical because they state what the system will do, and thus the test team must develop test cases to ensure that all requirements are validated and verified. Test cases and requirements are tightly intertwined, since a test case can demonstrate that a captured requirement is not suitable for testing or implementation, and requirements provide the basis for constructing a requirements traceability matrix (RTM). Assigned testing resources must develop and design test cases whose test steps successfully validate the design of the implemented requirement.


In addition to leveraging the aforementioned sources of information, assigned testing resources must review "as is" documentation from the legacy systems to be replaced by SAP, company business rules, the project's scope statement, key performance parameters, and industry standards to ensure that the documented test cases contain sufficient information for validating all expected end-to-end business scenarios and business requirements. Peer reviewing test cases is an effective technique for eliminating or modifying inadequate and ambiguous test cases prior to the start of test execution.

PEER REVIEWING TEST CASES

Test cases can be written by multiple authors or in the absence of any clearly identified quality assurance (QA) guidelines, which often results in test cases whose contents are suspect or questionable, since they may not offer valid testing conditions. A peer review consists of the following activities:

■ Members of the QA team ensure that the test case follows established documentation guidelines and standards (i.e., no fields are left blank).
■ Business analysts (BAs), subject matter experts (SMEs), and testers review the test case to ensure that the test case, when executed, fulfills the intent of the covered testable requirement.
■ Independent or third-party testers review the documented preconditions and test steps of the test case, and execute the test case to determine how clear the test case documentation is and to provide feedback on any areas of the test case that are missing details or are ambiguous to execute.
■ Designated project members, or the client requesting SAP services from the SAP system integrator, review and approve all test cases prior to test case execution.
■ Different members of the same business area peer review each other's test cases (i.e., a member of the sales and distribution [S&D] team creates a test case and a different member of the S&D team reviews it and provides feedback).
■ End users participating in user acceptance tests (UATs), who reexecute previously executed test cases as part of UAT, can


provide feedback on the construction and contents of the test cases.
■ Technical resources with knowledge of the test tools that are expected to automate business processes from documented test cases can provide feedback when test steps are missing or the testable conditions are unclear.

The test manager must allocate time for peer reviewing test cases as part of the project schedule and the test case planning effort. Furthermore, the test manager must document procedures and guidelines for determining the number of test cases to be reviewed (sample testing), who conducts the peer reviews, what forms are used for providing peer-review feedback, which test cases must be peer reviewed (i.e., peer review test cases with a business criticality of high), and how disputes over the feedback from a peer review will be addressed. After peer reviews are applied to test cases, it is good practice to record lessons learned from the process and identify any deficiencies or strengths associated with the peer-review effort.

METRICS FOR PLANNING TEST CASES

Peer reviews assist in improving the quality of the contents of test cases, but in the course of peer reviewing and constructing test cases it is also necessary to collect metrics. Building and constructing test cases implies that progress is measured at defined intervals by the number of completed test cases. But counting the number of constructed test cases is only one of many metrics to be gathered in the planning of a test case. The SAP project and test managers also have a vested interest in collecting the following (or similar) metrics when planning to create a test case:

■ Number of approved test cases.
■ Number of test cases that have been created, completed, started, or are in progress.
■ Number of hours spent constructing a test case.
■ Number of requirements to which a test case traces.
■ Average number of transaction codes per test case.
■ Number of test steps per test case.
■ Which teams have the most test cases.


■ Which individuals have been assigned to construct the most test cases.
■ Which test cases with a complexity level of "high" have been completed.
■ Average time to document and construct a test case.
■ Trend of completed test cases over a five-week period.
■ Total number of hours allocated to the construction of test cases.
■ Average number of hours spent modifying a test script.

Gathering and collecting these metrics is straightforward in test management tools that provide reporting capabilities. In contrast, collecting these or similar metrics from disconnected spreadsheets on multiple shared drives proves to be a daunting and time-consuming task. These metrics assist in estimating the costs of future testing cycles and in identifying deficiencies associated with the creation of test cases.
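Several of these metrics are trivial to compute once test case records live in one repository; the record fields below are assumptions about what a test management tool might export:

    from statistics import mean

    # Assumed export from a test management tool.
    test_cases = [
        {"id": "TC-01", "status": "completed", "hours": 6.0, "steps": 12},
        {"id": "TC-02", "status": "in-progress", "hours": 3.5, "steps": 8},
        {"id": "TC-03", "status": "completed", "hours": 9.0, "steps": 20},
    ]

    completed = [tc for tc in test_cases if tc["status"] == "completed"]
    print("Completed test cases:", len(completed))
    print("Average hours to construct:", mean(tc["hours"] for tc in test_cases))
    print("Average steps per test case:", mean(tc["steps"] for tc in test_cases))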

MAINTAINING TEST CASES

SAP projects constantly undergo changes and evolutions, from upgrades to the addition of new modules or system enhancements, which can cause the underlying system design and implementation to diverge from what was originally captured as system requirements during blueprint phases or previous gap analyses. All these system changes can cause previously designed test cases to become obsolete or unfit for test execution purposes. Furthermore, the level of detail in test cases can vary widely if the test cases are not peer reviewed or if the QA team did not enforce standards. SAP projects may find that test cases need to be updated for the following reasons:

■ Test cases do not contain enough information for validating the system requirements.
■ Test cases do not have sufficient details for validating system functionality at the back-end level.
■ The test data in the test cases has become obsolete and thus does not reflect the system's baseline or configuration settings.


■ Requirements have evolved.
■ The requirements have changed but were not properly documented, so the contents of the test cases do not reflect the scope of the requirements.
■ The test cases are written at a high level and thus do not lend themselves well to test case automation (i.e., a test case will say "Create a sales order and a delivery" but not state how these two objects are created in SAP or what conditions need to be verified).
■ The test case has multiple authors who did not adhere to the same documentation standards, and the test case verifies functionality for a large end-to-end process involving multiple SAP transaction codes, security authorizations, and work flow.
■ The test case was never approved or peer reviewed prior to test case execution.
■ Resource turnover: a consultant knowledgeable in a business area or SAP module who constructed and designed a test case may exit the project without a formal transition or knowledge sharing with the next person who will "inherit" ownership of the test case.
■ Ownership of a test case for maintenance and modifications becomes fuzzy when a test case spans multiple business areas and multiple individuals contributed to its creation. Without established ownership, it is unclear which individuals are responsible for updating and maintaining the test case.

Since test cases are subject to frequent modifications, it is imperative to store them in a single secured repository with audit trails and version management. In the absence of a single repository for storing and managing test cases, an SAP implementation increases the likelihood that the contents of the test cases will become obsolete and irrelevant, since the test cases may be stored on multiple shared drives or in different folders, leaving no central location for managing them. For these reasons, it is strongly recommended that test cases be maintained and updated in a test management tool whenever requirements change and whenever new system changes are proposed and accepted for the production environment.


CHAPTER 11

Capacity Testing

Recent industry data suggests that large companies implementing SAP may experience losses of millions of dollars per hour or minute for each occurrence of downtime in the SAP production system. Fortunately, well-planned and well-executed capacity tests are a proven method for minimizing unexpected SAP downtime. Capacity testing is a broad name for tests that ensure that the SAP system meets established response times and can support the expected total number of concurrent end users with optimal system response times. The main types of capacity tests are performance, volume, stress, and load testing. Each type of test plays a critical role in fine-tuning the SAP system, discovering degradation points, and eliminating system bottlenecks. Furthermore, appropriate execution of each capacity test can help the SAP system achieve the following objectives:

■ Verify that service-level agreements (SLAs) are maintained.
■ Ensure optimal software configuration settings.
■ Avoid overspending on hardware equipment.
■ Ensure that the system does not crash or fail given surges in volume from seasonality.
■ Avoid financial losses from system failure in the production environment.
■ Reduce the number of end-user complaints reported to the help desk production support team.

Capacity testing is an ongoing activity throughout the life cycle of an SAP implementation. Many SAP projects conduct a capacity test only once a year, which exposes the business to the risk of failure in the production system. Given the constant changes that existing SAP implementations undergo on a daily basis from the introduction of hot


packs, configuration changes, new development interfaces, and the addition of modules, it is necessary to routinely monitor system performance and conduct capacity tests to ensure that the SAP changes do not adversely affect system response time. Every SAP production change introduces a risk to the established SLAs and can cause system downtime. New SAP implementations also require the execution of capacity tests before go-live. The SAP Roadmap methodology within the SAP Solution Manager platform has defined activities and accelerator templates for conducting volume and stress tests during the final preparation phase. Capacity tests require extensive planning, which must be completed well in advance of a system go-live or cutover. The most practical method for conducting capacity tests is with automated test tools, as these provide repeatability and results in the form of charts, graphs, and reports that help interpret and identify the causes of system degradation.

TEST PLANNING Capacity tests are needed for initial SAP implementations, planned upgrades, and as part of production support. A capacity test for a major SAP upgrade or initial SAP implementation should be planned six months in advance of the actual expected date for the execution of the test. Identifying which event triggers the need to perform a capacity test is the initial activity that helps to determine the objectives, the design, and level of support needed to conduct a capacity test. In Exhibit 11.1 a spoked wheel shows the various events within a typical SAP project that may prompt a capacity test. For instance, an existing SAP implementation that is in the midst of an SAP GUI upgrade may need to conduct a performance benchmark test to verify that the established response times for both custom and out-ofthe-box SAP transactions does not deteriorate or degrade after the GUI is upgraded. Discerning the event that prompts a capacity test is the initial step in planning a capacity test. It is necessary to note that multiple simultaneous events can trigger distinct capacity tests. For example, a project may simultaneously upgrade the SAP graphical user interface (GUI), add a new SAP module, include new advanced business appli-

EXHIBIT 11.1 Events that Trigger a Capacity Test in an SAP Environment
[Figure: a spoked wheel of triggering events, including an initial implementation, complaints from end users, GUI upgrades and reconfiguration, hardware changes, configuration changes, changes to the underlying database and servers, LAN/WAN upgrades, new ABAP interfaces, a new SAP module or bolt-on added, and a new division added.]

For example, a project may simultaneously upgrade the SAP graphical user interface (GUI), add a new SAP module, include new advanced business application programming (ABAP) interfaces, and add new SAP end users from a different corporate division, which can cause the SAP project team to conduct performance, volume, load, soak, and stress testing on the SAP system. Once the SAP project recognizes that at least one form of capacity test is necessary, a decision must be made to determine which specific type of capacity test must be executed. Understanding the events that trigger the capacity test and the specific types of capacity tests that need to be performed helps determine the scope, objectives, and goals of the capacity test. Exhibit 11.2 provides examples and objectives for the various capacity tests. It shows that a stress test can help a project establish optimal hardware settings for an initial SAP implementation and also address sporadic surges and spikes in system traffic attributed to seasonality, marketing programs that drive up company sales, or random events that cause SAP system traffic to increase unpredictably, whereas a load test may be performed to verify established SLAs.


EXHIBIT 11.2 Different Types of Capacity Tests to Meet Different Testing Objectives

Performance
  Example: What is the response time for time-critical transactions and processes under a low load?
  Purpose: Benchmarking.

Stress
  Example: What is the maximum number of users that the system can hold?
  Purpose: Minimize downtime during seasonality.

Load
  Example: Under a load of 50 users in addition to batch jobs running in the background, the maximum response time for any transaction will not exceed 1 minute per screen 95% of the time.
  Purpose: Confirm an SLA.

Volume
  Example: What is the CPU utilization and disk activity for a peak system load? For all batch background processes?
  Purpose: Validate expected system throughput.

Network
  Example: How do established response times for critical transactions degrade when 50% of available bandwidth is consumed?
  Purpose: Assess latency problems for remote locations, bandwidth, and network traffic bottlenecks.

Soak
  Example: What happens to SAP buffers after the system has gone live for four months?
  Purpose: Assess memory leaks and database problems that create system stalling; throttle the system.

Identifying the need for a specific type of capacity test prompts the SAP project team to assign an owner for planning, designing, executing, and analyzing the results of the test. Typically, the SAP project or its system integrator will initially assign ownership for such a test to the SAP Basis group, or to the test team if one exists. The owner(s) of the test develops a capacity test plan that specifies, at a minimum, what type of test will be conducted, risks and a mitigation strategy, a schedule with planned activities, processes to be tested, monitoring support for the test, available resources for the test, and the major objectives to be accomplished.


For SAP clients utilizing the Solution Manager platform, accelerators in the form of templates are offered for planning stress and volume tests. In parallel with the writing of the capacity test plan, the requirements for the test are identified. Capacity requirements are based on the expected SAP business throughput, which generates system traffic, and on the total population of production end users. Initial SAP planning questionnaires such as the one in Exhibit 11.3(a) help identify system information and potential sources of traffic for an SAP implementation. SAP throughput is captured with sizing documents or SAP user community profiles; Exhibit 11.3(b) is a sample template for capturing it. Once SAP throughput is captured, requirements are defined and subsequently turned into SLAs. For existing SAP implementations, SAP transaction code ST03 provides historical usage data. For new implementations, SAP throughput can be derived from the system traffic of the legacy systems that SAP will replace. SAP throughput is captured with the input and feedback of several project stakeholders, including database administrators (DBAs), infrastructure engineers, middleware engineers, subject matter experts (SMEs), developers, business owners, and configuration experts. The suggestions listed after Exhibit 11.3(b) are offered as criteria to help project teams capture expected or existing SAP throughput.


EXHIBIT 11.3(A) Questionnaire for Gathering Initial Information for an SAP Implementation

Project Info: Attach a project schedule and/or GANTT chart including testing milestones. Attach a system architecture diagram. Attach the SAP client landscape.
Background Application Under Test: SAP version for the TEST instance? Type of SAP installation (GUI, HTML, Portals, Netweaver, etc.)?
System Reliability: What are the availability requirements for the test instance?
Facility: Is there a test facility? If so, is it a shared facility or dedicated to the testing of SAP?
OS Description: List the operating system, including type and version, for the test instance.
Additional Sources of Traffic: List all SAP add-ons.
System Configuration: Are there any firewalls? Load balancers?
Tools: Describe the available automated test tools.
ABAP (RICEF) List: How many interfaces are in scope? Conversions? ABAP reports? User exits?
In-Scope Functionality: List all in-scope modules.
Industry Solution: Are there any SAP industry-specific solutions?

EXHIBIT 11.3(B) Template to Capture SAP Throughput
[Table template with header fields for Author(s) and Team Name and columns for Application; Transaction Code, Business Process, or Object; Throughput; Number of Dialog Steps per Process; Hours of Operation (i.e., Shifts); Total Expected Concurrent Users/hr; Online/Background Process Dependencies; and Target Response. The sample rows cover an SRM-EBP shopping cart (five shopping carts created per hour per user, 1,000 concurrent users over a 12:01–24:00 window, targets of <=5 seconds per screen, <=3 seconds to place an order with the submit button, and <5 seconds to receive a confirmation number 100% of the time); SD module and CRM Sales Internet System sales order creation (one sales order with on average 5–10 items, fifteen repair orders, or thirty consignment orders per day per user, 30 to 100 concurrent users across daytime shifts, targets of 3 to 5 seconds per screen 80% to 95% of the time under a peak system load); Web-based CATS time entry for exempt, contractor, and union hours (one entry per person per day, 300 to 700 users for the entire entry window, targets of <6 seconds to log on, <=5 to 10 seconds per screen, and <5 to 10 seconds to submit 90% to 95% of the time); CJ20N project processing in the PS module (eight projects per day per user, 20 concurrent users); and background work such as an MRP run and a daily system backup (one batch job per day, N/A concurrent users, with the program required to complete within 1 hour).]


■ Follow the 80/20 rule. In the late nineteenth century, Italian economist Vilfredo Pareto observed that, in Italy, 80 percent of the country's wealth was held by 20 percent of the population; this observation later became known as the 80/20 rule. Pareto's observations led to the creation of Pareto charts and have played a pivotal role in statistical process control (SPC). Pareto analysis in SPC operates in this fashion: 80 percent of problems usually stem from 20 percent of the causes, which leads to the famous phrase "the vital few and the trivial many." Pareto's principle is applicable to all types of capacity testing: stress, performance, volume, load, soak, and so on. When applying the 80/20 rule to capacity testing, one can focus on the 20 percent of the business processes that are likely to cause 80 percent of the entire system traffic. Pareto's principle allows test managers to select and identify the business processes that are most likely to cause bottlenecks, performance degradation, and traffic across a software application (a selection sketch follows this list).
■ Focus on business processes that are subject to fluctuations in demand and seasonality due to causes such as holidays and marketing programs.
■ Select 5 to 10 business processes per SAP application module with a high throughput of dialog steps.
■ Select processes that are executed frequently by a large population of end users.
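To make the 80/20 selection concrete, the following minimal Python sketch ranks candidate business processes by their share of total dialog-step traffic and keeps the smallest set that covers roughly 80 percent of it. The process names and volumes are hypothetical placeholders, not figures from any real project.

    # Pareto (80/20) selection of capacity-test candidates: rank business
    # processes by hourly dialog steps and keep the smallest set that
    # accounts for ~80 percent of total system traffic.
    hourly_dialog_steps = {          # hypothetical volumes
        "Create Sales Order (VA01)": 42_000,
        "Time entry (CATS)": 25_000,
        "Goods movement (MIGO)": 9_000,
        "Create Delivery (VL01N)": 8_000,
        "Approve invoice": 1_200,
        "Hire employee (PA40)": 300,
    }

    def pareto_candidates(volumes, cutoff=0.80):
        """Return the busiest processes covering `cutoff` of total traffic."""
        total = sum(volumes.values())
        selected, covered = [], 0.0
        for name, steps in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
            selected.append(name)
            covered += steps / total
            if covered >= cutoff:
                break
        return selected, covered

    picked, share = pareto_candidates(hourly_dialog_steps)
    print(f"{len(picked)} of {len(hourly_dialog_steps)} processes drive {share:.0%} of traffic:")
    print(picked)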

In SAP projects, system traffic or throughput can originate from interface programs, Intermediate Documents (IDOCs), bolt-on systems such as Customer Relationship Management (CRM), Supplier Relationship Management (SRM), and Business Warehouse (BW), and end users interacting with the core R/3 SAP transactions. The amount of traffic each SAP source generates varies based on the complexity of the process, the frequency with which the process is performed, and the expected number of end users for the process. For instance, activities such as timesheet entry via SAP Cross Application Time Sheets (CATS) can affect a large number of end users, whereas processes involving approving invoices, hiring employees, and creating network activities for a project may affect only a handful of end users within the organization. In contrast, activities such as month-end closing, payroll runs, and material requirements planning (MRP) runs may be performed at infrequent intervals by few end users but must be thoroughly tested because they are complex, heavy in database input/output processing, resource-intensive on the system, and must complete within a certain time frame to avoid impacting dependent activities. The project team will need to develop requirements for critical, resource-intensive processes affecting multiple individuals that can be validated and measured through testing. Developing well-written requirements requires support and assistance from multiple stakeholders from the Basis, configuration, and development teams. Avoid requirements that can be interpreted differently by different individuals, such as "make the system as fast as possible"; requirements like these are ambiguous and cannot be tested.
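As a rough illustration of how a throughput profile of this kind becomes a load model, the sketch below multiplies concurrent users, transactions per user-hour, and dialog steps per transaction to estimate hourly dialog-step traffic per process. All figures are invented stand-ins for the values a project would capture in Exhibit 11.3(b).

    # Turn an Exhibit 11.3(b)-style throughput profile into an hourly load
    # estimate. dialog_steps is the number of SAP screens a user walks
    # through per business transaction; every figure here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ProcessProfile:
        name: str
        concurrent_users: int
        transactions_per_user_hour: float
        dialog_steps: int

    profiles = [
        ProcessProfile("SRM-EBP shopping cart", 1000, 5.0, 6),
        ProcessProfile("SD create sales order", 30, 0.125, 8),  # one per 8-hour day
        ProcessProfile("CATS web time entry", 300, 0.5, 4),
    ]

    total = 0.0
    for p in profiles:
        steps_per_hour = p.concurrent_users * p.transactions_per_user_hour * p.dialog_steps
        total += steps_per_hour
        print(f"{p.name:24s} {steps_per_hour:10,.0f} dialog steps/hour")
    print(f"{'expected peak total':24s} {total:10,.0f} dialog steps/hour")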


The following are examples of a poorly written requirement and a well-written requirement:

Example 1: Poorly Written Requirement. "The system shall handle on average 3,500 sales orders per day with an average of 20 lines per sales order."

Problems with this requirement:

■ The number of expected users creating sales orders is unknown.
■ The desired end-user response times for this process are not stated. Potentially, it may take an end user an hour to create a sales order, which is an unacceptable response time for the business.
■ It fails to state the medium under which the sales orders are created (i.e., dial-up connections, mobile engines, DSL connections, etc.).
■ It does not address hours of operation or acceptable success factors for validating the requirement.

The client may have a business need for average response times of three seconds per screen (with a maximum number of concurrent end users logged on) to create a sales order between the peak working hours of 9 A.M.–12 P.M., but this business need is not manifested at all as written. As written, this requirement can be interpreted differently by different readers and cannot be measured, tested, or validated.

Example 2: Well-Written Requirement with Precise Target Goals Specified for Validating the Requirement. "For the SAP CRM sales Internet system under a 500-hourly user load, sales orders static pages will display in under x seconds, dynamic pages in under y seconds, and search/order pages in under z seconds, 95 percent of the time during working hours (7 A.M.–7 P.M.) with no errors when accessed via corporate local area network (LAN) via Browser X and with each sales order containing between 20 and 30 line items."

Reasons why this is a well-written requirement:

■ Gives specific usage of the application under test for the intended production end users that can be measured and tested.
■ Provides the expected number of concurrent end users during the peak hours of operation for a given workday.


■ Provides the expected size of a CRM SAP sales order in the target production environment (20 to 30 line items).
■ States the connection speed for accessing the Internet and the specific Internet browser.
■ Has a success factor for validating the requirement (95 percent).
■ Gives specific threshold values that need to be validated for the various Internet pages of the SAP CRM application.

Well-written requirements are subsequently converted into SLAs. SLAs establish goals and objectives that the system must conform to after it is deployed into the production environment. SLAs need to be tested and confirmed in conjunction with the other sources of system traffic prior to system deployment, and they need to be monitored constantly in the live production environment. SAP's tools and services, such as EarlyWatch and CCMS, help monitor the SAP system; non-SAP tools for monitoring system performance include SiteScope, Luminate, and BMC Patrol. SLAs are initially verified prior to go-live through the execution of test cases in a production-sized environment. Therefore, it is critical that the design of the test cases represents the behavior of the production end users. The test environment for a capacity test should closely resemble the intended target production system in size, database connections, database size, hardware, configuration settings, and interfaces to external systems. Attempts to extrapolate capacity-testing results from an SAP instance that is not production-sized or representative of a production environment are usually fruitless, since performance is not linear across environments.
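A percentile target such as the one above can be checked mechanically once response times have been collected. The sketch below applies a nearest-rank 95th-percentile test to a handful of made-up samples; on a real project the samples would come from the load tool's results logs.

    # Validate a percentile-based SLA, e.g., "pages display in under 3
    # seconds 95 percent of the time." The samples are invented.
    import math

    def percentile(samples, pct):
        """Nearest-rank percentile: the smallest value >= pct of samples."""
        ordered = sorted(samples)
        rank = max(1, math.ceil(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    def sla_met(samples, threshold_seconds, pct=95):
        return percentile(samples, pct) <= threshold_seconds

    static_page_times = [1.2, 0.9, 2.8, 1.7, 3.4, 1.1, 2.2, 0.8, 2.9, 1.5]
    print("95th percentile:", percentile(static_page_times, 95), "seconds")
    print("SLA met:", sla_met(static_page_times, threshold_seconds=3.0))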

TEST DESIGN AND CONSTRUCTION

The construction of SAP test cases for a capacity test can be achieved with an automated test tool such as Mercury Interactive's LoadRunner, or manually with spreadsheets or text editors. Given time constraints and project deadlines, automated test tools are recommended as the de facto solution for designing, executing, and analyzing the results of a capacity test. The rationale for utilizing an automated testing approach over a strictly manual approach is provided in Exhibit 11.4.

EXHIBIT 11.4 Benefits of Automated Testing over Manual Testing for a Capacity Test

Automated
  Concept: Executing processes with test scripts that require little or no human intervention.
  Benefits: Repeatable and consistent. Allows data creation for scripts that require data seeding. Can be used for nontesting activities such as data loading for training. Collects automatic statistics, reports, and metrics. Allows hundreds of end users to be deployed across a single machine. Allows testing on "remote" sites. Can be "scheduled" to run without human intervention. Allows control over the execution of test scripts. Allows a flexible data access method; data allocation allows an "apples to apples" test. Includes SAP monitors for troubleshooting.
  Drawbacks: Expense. Training. The test tool may not recognize objects/applications within your infrastructure. Requires installation and validation against the SAP system.

Manual
  Concept: Executing test scripts with keystrokes from human beings and collecting results with watches.
  Benefits: Little or no training involved. No installation of test tools. Makes a test tool expert unnecessary.
  Drawbacks: Not easily repeatable. Requires a lot of coordination. Requires many PCs to deploy to end users. Difficult to synchronize end users for keystrokes. Reports and graphs are not produced. Expensive (man-hours). Takes resources away from "primary" job functions.


However, the underlying reason for conducting an automated capacity test is that most SAP projects find it an intractable challenge to coordinate hundreds or even thousands of end users in a single room to execute manual test cases over an extended period of time to optimize SAP system performance. While an automated approach is recommended, it is important to note that manual intervention during a capacity test will be needed to launch manual batch jobs, interfaces, ABAP reports, and other non-SAP applications that send traffic into the system through remote function calls (RFCs), Intermediate Documents (IDOCs), and so on. Hence, a mixture of automated and manual testing will provide greater flexibility for projects conducting a capacity test.
Independent of the approach (automated or manual) for executing a capacity test, test cases representing the system throughput need to be documented. Test cases can be recycled from previous testing efforts but must include test steps that are representative of how the process will be executed in a production environment. Test cases from previous efforts may need modification to include larger sets of data for processes that need to be executed multiple times or processes that consume data. For instance, a process that creates SAP shipments may exhaust the existing sales orders in the system, or the creation of outbound deliveries may deplete the inventory of a warehouse. Since a stress or load test may need to be repeated or iterated multiple times during a given time frame, it is important to have documented test cases that identify all the data sets necessary to execute the business process or that contain the test steps needed to build self-feeding test cases, which replenish consumed data after every iteration (a sketch of such a data pool follows below).
Documented test cases that represent the production throughput need to represent stable system functionality. The underlying assumption for a capacity test is that the system's functionality has been previously tested and verified and is stable and frozen. For example, the objective of a capacity test may be to determine system response times under a peak load of concurrent users creating SAP deliveries, as opposed to determining whether the SAP delivery process functions correctly. Documented test cases are the cornerstone for designing automated test scripts for projects that elect automated test tools for executing a capacity test.
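One way to keep a repeated test from running dry is a self-feeding data pool. The sketch below consumes one record per iteration and replenishes the pool with a substitute; create_sales_order is a hypothetical stub standing in for whatever transaction or BAPI call the project would actually script.

    # A self-feeding pool: each iteration consumes a record (e.g., a sales
    # order number used to create a shipment) and replenishes it so the
    # next iteration has fresh, unique data.
    import itertools
    from collections import deque

    _order_numbers = itertools.count(9000001)

    def create_sales_order() -> str:
        """Hypothetical stub: pretend to create an order, return its number."""
        return f"SO{next(_order_numbers)}"

    class SelfFeedingPool:
        def __init__(self, seed_records):
            self._pool = deque(seed_records)

        def take(self) -> str:
            record = self._pool.popleft()             # consume a record
            self._pool.append(create_sales_order())   # replenish for next time
            return record

    pool = SelfFeedingPool(["SO9000000"])
    for _ in range(3):
        print("iteration consumes", pool.take())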


With an automated test tool, a process can be recorded, played back with multiple sets of data, and executed with N number of emulated end users. For instance, an automated test tool may record SAP's time entry process for CATS and allow playback of time entry with hundreds or thousands of emulated end users. Automated test tools allow for repeatability when processes need to be executed multiple times in order to troubleshoot a system or identify bottlenecks or system degradation points, which cannot easily be accomplished with a large number of end users pressing keystrokes at the same time. Exhibit 11.5 shows how an automated test tool directs traffic to different machines with emulated end users to create load in an SAP environment and generate results.

EXHIBIT 11.5 Automated Test Tool Simulating System Load in an SAP Environment
[Figure: the test tool directs automated test cases to machines emulating system traffic, which drive the SAP system receiving the traffic; results logs flow back to the tool.]

The primary benefits associated with automated load-testing tools include:

■ Emulate, from a single point, any number of users and their impact on the system.
■ Perform large-scale tests with minimal hardware resources.
■ Adjust volume levels through dial-up/dial-down virtual users.
■ Create repeatable test scripts.
■ Find and correct scalability problems early in the development process.


■ Correlate poor response times with virtual user levels.
■ View or print test run history for each test or group of tests executed.
■ Measure system performance under differing conditions.
■ Replace human beings with emulated end users.
■ Allow system monitoring during test execution.
■ Allow interpretation of system response data with automatically generated results logs.

When an automated test tool is selected for a capacity test, test cases are recorded to reflect end-user behavior and the rate of throughput. To increase the accuracy of automated test cases, it is recommended that they include "think times" so that playback occurs at a rate equivalent to the speed at which an end user executes a transaction. For instance, a recorded process for the creation of work orders can take an end user on average five minutes per work order, whereas playback of the automated test case may take on average 40 seconds. To compensate for the playback speed, artificial or random think times may need to be introduced into the automated test case so that it closely resembles the expected system throughput. Also, for purposes of diagnosing and troubleshooting an automated test case, it is recommended that automated test cases contain verification points that inspect expected system messages, windows, or attributes. For example, when a user logs on to the system, a welcome message is expected; this message serves as a visual cue that can be verified by the automated test case. Other visual cues applicable for verification include status bar messages, screen titles, information screens, and field attributes. Designing automated test cases that embed logic to recognize visual cues helps in troubleshooting the system when performance degrades, as visual cues facilitate pinpointing where the system experiences a choke point. The creators of automated test cases for an SAP capacity test also need to implement logic in the design of the test cases for the following conditions (a combined sketch of think times and verification points follows this list):

■ Optional screens that pop up based on different conditions, such as data entered or security privileges.


■ Data captured from an SAP screen with an automated test tool for use in a subsequent recorded process may need to be converted from one format to another. For instance, a captured digit from an SAP screen may look like a number, but it may be captured as a character as opposed to an integer, which prevents the automated test case from playing back correctly.
■ A script that logs on to an SAP application such as Employee Self-Service (ESS) or E-hiring may need to randomly substitute the hard-coded value for a machine name with a parameter value to properly test the load-balancing application.
■ In Web-based applications for SAP's CATS, Enterprise Buyer Program (EBP), ESS, and the CRM Sales Internet system, emulated end users must maintain their Web session ID throughout the navigation of different Web pages before completing a given scenario and exiting the application.
■ Emulated end users must log off completely from the application, particularly for Web-based applications.
■ Self-feeding test cases that consume data on each executed iteration require unique data records for subsequent iterations.
■ Scrolling functionality is needed when populating data into an SAP table or matrix where records must be inserted in the next available open field, independent of its location on the screen.
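The sketch below pulls several of these ideas together in the shape of a single emulated end user: randomized think times between screens and a verification point that inspects an expected visual cue. The fetch stub and the welcome-message check are hypothetical stand-ins for a load tool's real recording and playback API.

    # One emulated end user with think times and a verification point.
    import random
    import time

    def fetch(step: str) -> str:
        """Stub for one dialog step; a real script drives SAP GUI or HTTP."""
        return f"Welcome to the system ({step})"

    def think(low=3.0, high=8.0):
        """Pause like a human so playback matches real end-user pacing."""
        time.sleep(random.uniform(low, high))

    def verify(response: str, cue: str, step: str):
        """Verification point: fail fast with context if a cue is missing."""
        if cue not in response:
            raise AssertionError(f"step {step!r}: expected cue {cue!r} missing")

    def virtual_user():
        for step in ("logon", "enter header", "enter items", "save", "logoff"):
            response = fetch(step)
            verify(response, "Welcome", step)  # inspect the expected message
            think(0.01, 0.05)                  # shrunk so the demo runs fast

    virtual_user()
    print("scenario completed; all verification points passed")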

Automating test cases for a capacity test is a dedicated development effort that requires support and assistance from members of the SAP configuration, Basis, and development teams, as well as from SMEs and the project manager. The project members play different roles in identifying system throughput, documenting test cases, developing test cases, approving automated test cases, monitoring trial runs, setting up the environment, and identifying data values. After the roles and activities for the project members have been identified, it will be necessary to develop and debug the automated test cases and conduct trial runs. Trial runs representing 20 to 50 percent of all expected system traffic should be executed for all automated test cases prior to the actual start date of the formal capacity test, which can help with the following:


■ Flushing out errors with the automated test cases, such as data conflicts.
■ Verifying that the automated test tool has been properly installed on all machines.
■ Verifying log-on IDs and passwords.
■ Ensuring that the system monitors have been properly installed.

Trial runs should be communicated in advance to the rest of the project team to avoid impacting other users logging on to the test environment. Results from trial runs may help diagnose initial response problems with the application; for instance, initial trial runs may expose deficiencies in the load-balancing application, the need to create a log-on group, or the need to allocate more disk space to table spaces. After the trial runs are successfully executed, the project team schedules the formal execution of the capacity test. Scheduling and planning the execution of the actual test includes confirming hardware resources and facilities (i.e., a war room), confirming monitoring resources for the test, reserving the test environment, communicating the test dates to the rest of the project team, and reviewing the risk plan and risk mitigation strategies in the event that the system crashes or fails in the middle of the test. To reduce the risks associated with a capacity test, many projects schedule capacity tests during off hours, when the tests will not impact the test environment during working hours, and back up the system prior to the start of the testing cycles. Furthermore, to avoid skewing test results, some companies deactivate or disable SAP log-on user IDs that are not part of the capacity test to prevent users from logging on to the system during testing hours.

TEST EXECUTION

Execution of a capacity test is iterative and may require multiple test runs until the system reaches an architecture that is optimal and consistent with the established SLAs. Testing with automated test tools allows for repeated playback of test cases as long as the necessary data sets are identified and the test environment has


a frozen design. In contrast, a manual capacity test with end users offers limited repeatability in the event that test cases have to be executed multiple times. Given the impracticalities of manual execution, the remainder of this chapter focuses on testing SAP with automated test tools.
For initial SAP implementations and major system upgrades, the project may need to allocate four weeks, which may include weekend time, for executing the test cases, troubleshooting the system, and gathering and interpreting test results. This four-week estimate is separate from the time allocated to the design of the automated test cases. Prior to the execution of the test, the project manager and/or Basis team may decide to back up the system. On the first day of execution, all monitoring groups need to have their monitoring equipment turned on and ready to capture data or system pulses at least 45 minutes prior to the actual start time of the capacity test. (Exhibit 11.6 displays some sample parameters, and the tools to monitor them, for an SAP implementation with an Oracle database.) Initial system pulses may indicate that the system is experiencing delays, degradation points, or very little activity (underutilization) prior to the start of the test. If initial system pulses display high system usage or utilization, it is recommended that the monitoring members investigate the causes behind the high utilization and eliminate them. For example, high database activity or server utilization on the SAP box could signal to the Basis administrator that individuals not associated with the test are logged on to SAP or that batch jobs running in background mode need to be terminated.
Monitoring resources for an SAP capacity test include members of the network (infrastructure) group, DBAs, Basis, server administrators, and development teams. Participants for the entire capacity test may additionally include the architects for the solution under test and development and configuration team members who help troubleshoot and diagnose system problems. Since it is possible that participants are offsite or away from the war room, it is recommended that a dial-in number be provided to all participants to allow them to provide feedback when issues and problems arise during the capacity test.


EXHIBIT 11.6 Sample Parameters to Monitor for an SAP Capacity Test

Area: Database. Measuring tool: STATSPACK. Parameters:
  CPU used by this session
  Consistent gets
  DB block gets
  Physical reads
  Top wait events
  Physical reads (lob)
  Physical reads (direct)
  Total sorts (memory sorts + disk sorts)
  Disk sorts/memory sorts

The person in charge of the capacity test needs to ensure that all participants are present or available at least one hour prior to the start of the test. Typically, the person in charge of a capacity test is the person who launches the automated test cases from the war room and generates traffic with emulated end users on different machines. If critical participants are absent for unexpected reasons, the capacity test may be suspended. When all participants have been confirmed, the monitoring tools have been engaged, and the initial system pulses have been successfully gauged, the designer of the automated test cases may start to kick them off in their planned order of execution. Automated test cases should be kicked off gradually to avoid crashing the servers during the log-on process. It is important to note that the capacity test will consist of more than just automated test cases; it may also include manual processes such as interfaces, month-end activities, payroll runs, and batch jobs, which will require other users to manually log on to the SAP system as the automated test cases are being launched. A well-planned capacity test should offer a calendar or schedule with the expected times for starting or launching processes in SAP and the resource responsible for doing so.
On the first day of testing, the scenarios for the capacity test should ramp up the system load in increments of 5 to 10 percent of the total load. For instance, if the total expected user load is 1,000 concurrent users, excluding manual processes and batch jobs, the person


responsible for kicking off the automated test cases may launch the first 100 emulated end users for the first 10 minutes of the test and take a snapshot or pulse of the system response times. Then, assuming the system response time is adequate, another batch of 100 emulated users is kicked off, for a total of 200, and so on every 10 minutes until all 1,000 concurrent emulated end users are logged on. The actual testing calendar would show the expected ramp-up of emulated end users in addition to the execution of manual processes. Using the previous example of 1,000 concurrent users, the bar-coding team members might launch a job sending IDOCs into SAP after the first 300 concurrent emulated users are logged on, whereas the supply chain team members might launch an MRP run, in addition to other background jobs, after the first 400 emulated end users are logged on.
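The gradual ramp-up can be driven by a simple loop. In the sketch below, hypothetical hooks into the load tool add users in 10 percent increments, take a pulse after each step, and suspend the ramp if response time breaches a guard threshold; the timings are shrunk so the sketch runs quickly.

    # Stepwise ramp-up with a response-time guard between increments.
    import random
    import time

    TOTAL_USERS = 1000
    STEP = TOTAL_USERS // 10   # 10 percent increments
    GUARD_SECONDS = 5.0        # investigate beyond this response time

    def launch_users(n: int):
        """Hypothetical hook: tell the load tool to log on n more users."""

    def sample_response_time() -> float:
        """Hypothetical hook: pulse the average screen response time (s)."""
        return random.uniform(1.0, 6.0)

    active = 0
    while active < TOTAL_USERS:
        launch_users(STEP)
        active += STEP
        time.sleep(0.01)       # stands in for the 10-minute soak per step
        pulse = sample_response_time()
        print(f"{active:5d} users -> {pulse:4.1f} s per screen")
        if pulse > GUARD_SECONDS:
            print("response time degraded; suspending ramp for investigation")
            break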


The first day of execution may demonstrate that the production-sized or actual production environment cannot adequately handle even a moderate or light load of emulated users when the capacity test includes batch jobs, SAP CATT (Computer Aided Testing Tool) scripts, data from external components, and IDOCs, which may cause the test to be suspended. Under these circumstances, where the system locks up, more emulated end users cannot successfully log on, or performance degrades beyond an acceptable threshold, it is critical to annotate any reported errors or capture them with screen shot printouts. Screen shots are particularly useful for troubleshooting Web-based systems, which may produce cryptic runtime or Java errors when performance degrades. System errors during the test runs should be reported immediately when first observed to all monitoring parties on the conference call or in the war room to help pinpoint the cause of the problem. Additionally, the person who first observes an error can send screen shot printouts of all observed system problems to all monitoring groups. Often, the initial response from monitoring resources to a system error is that the automated test tool has a problem or that the automated test cases were recorded incorrectly; when confronted with evidence of the system problem through screen shot printouts, the monitoring resources are more likely to recognize the merits of the system error and the system's limitations. Another good technique for overcoming skepticism about a system error from a monitoring group during a capacity test is to have the monitoring group log on to the application, which may produce an error in and of itself or may show that the system response times are unacceptably slow when the user attempts to execute a process.
Depending on the type of problem observed during the execution of the capacity test, the monitoring groups, configuration experts, or development team members can make adjustments to the system during the same day of the test to allow the testing to continue. If the problem observed has a simple and easy solution, the system may merely need to be refreshed or reset to allow the test to resume. However, if the problem proves to be more severe, such as inefficient Structured Query Language (SQL) statements in an ABAP program that require code to be rewritten, or if the system needs to be reconfigured or more hardware needs to be procured to improve response times, it may be necessary to suspend the capacity test indefinitely. The complexity of the observed system problem and its expected resolution will dictate how long it takes to resume the capacity test. For problems that allow the test to proceed even though the established SLAs are violated, the test engineer may execute capacity tests on consecutive days to allow the SAP experts to tweak the system gradually until the SLAs are verified. Even the best-planned capacity tests are an inexact science, requiring tweaking and modification of various parameters until a permanent solution is identified. To compensate for flaws in the design of the capacity test or in the identification of system throughput, it is recommended for initial SAP implementations and major system upgrades that a load 20 to 25 percent greater than the expected peak system load be emulated, which helps increase confidence that the system can successfully handle the expected production traffic or even seasonality.
At the conclusion of each day on which capacity tests are conducted, all results for the day should be gathered and stored in a central location for safekeeping, analysis, and future interpretation. The test engineer, along with other project stakeholders, may create success or exit criteria for concluding a capacity test at the end of each day or for the entire testing effort. For example, a system that is expected to handle a peak load of 1,000 concurrent end users and is proven to successfully handle a load of 1,300 end users while meeting all SLAs over a period of three hours may provide substantial evidence for successfully concluding the load and volume test. However,


for a stress test, a system that is expected to handle a concurrent peak load of 10,000 end users may find that after 10,500 concurrent users have been logged on for 20 minutes, the system has memory leaks, experiences unacceptable response times, or locks up or crashes, which indicates that the stress test has met its objective of determining the system's breaking point. The objective of the capacity test, along with its defined success criteria, provides a stopping point for the execution of the test.

TEST ANALYSIS

Results for a capacity test can be captured and reported at various times during a test. A capacity test may have severe problems after the first day of testing, which means that all graphs, charts, and reports from the test must be evaluated and reviewed before the test continues any further. In contrast, capacity tests that reveal minimal problems or system deficiencies may have reports, graphs, and charts that are accumulated over a period of several days and interpreted after the execution phase has completed. Automated test tools provide evidence that a problem exists with the application, but not necessarily what the exact problem is, which requires the test engineer, along with the other monitoring groups, to compare and review several graphs to detect inflection points and system spikes that cause degradation or failure to conform to SLAs. Test tools produce a phalanx of graphs and charts that in isolation offer little or no help in diagnosing system problems and may need to be superimposed on other graphs from the automated test tool or from the other monitoring groups (a sketch of locating such an inflection point follows Exhibit 11.7). Conversely, the automated tests and the graphs from other monitoring groups may provide evidence that the system consistently produces results within the specified thresholds for the project.
Graphs and charts that reveal a performance problem may prompt the project manager and the person responsible for resolving the problem to discuss its impact, severity, and the course of action. The problem could reveal that certain processes miss their SLAs by a slight percentage, while resolving it would delay the expected SAP go-live date at a large cost, which may prove impractical for the project. Exhibit 11.7 shows some of the common resolutions for SAP performance-based problems.


EXHIBIT 11.7 Common Resolutions for Performance Problems in an SAP Implementation

Create a group log-on.
Create a data archiving strategy to avoid increasing the size of the database to an unmanageable size.
Rewrite and tune SQL (ABAP) code.
Increase the number of application servers and dialog work processes.
Redo database log files.
Respecify buffers (SAP, database).
Provide more memory to support more work processes, and more processors to support those additional processes.
Reconfigure SAP modules.
Redistribute load balancing.
Allocate more disk space for tables.
Create an index on a table to avoid full-table scans.
Reinstall ITS servers to decrease utilization.
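Superimposing the user-load and response-time series makes the inflection point easy to locate programmatically as well as visually. The sketch below flags the first load level at which response time jumps disproportionately; both series are invented for illustration.

    # Find the degradation knee: the first load level where response time
    # grows by a disproportionate factor over the previous step.
    loads = [100, 200, 300, 400, 500, 600, 700]
    responses = [1.1, 1.2, 1.3, 1.5, 1.8, 3.9, 8.5]  # seconds per screen

    def find_knee(loads, responses, factor=1.5):
        for prev, cur, load in zip(responses, responses[1:], loads[1:]):
            if cur / prev >= factor:
                return load
        return None

    knee = find_knee(loads, responses)
    print(f"degradation begins near {knee} concurrent users" if knee
          else "no inflection point in this run")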

However, problems reported through the graphs and charts may demonstrate that the system is not ready to go live, and that if it goes live despite the reported performance problems, the company risks significant downtime in production and large financial losses. In situations where the performance problem presents a high risk of occurrence with deleterious consequences, the Change Control Board (CCB), the project sponsors, the steering committee, and the project manager may need to assess the magnitude of the problem and its impact on the project's schedule.
The lead tester or the owner of the capacity test is expected to produce a final write-up or project postmortem for the results of the capacity test. The various monitoring entities are expected to provide individual write-ups for the areas that they monitored and the areas that are expected to have performance problems. The final analysis write-up includes lessons learned, areas of system deficiency, verified SLAs, issues and problems encountered during the test, and resolutions for all reported problems. Problems reported during the test should be categorized, ranked, and stored in a defect management tool.
The analysis and final phase for a capacity test concludes when results are reported to and accepted by senior management and all defects have a resolution, or an expected resolution date with an assigned


owner. Although the initial SAP capacity tests can prove that response times are acceptable based on SLAs, and senior management has accepted the results prior to system go-live, it is important to note that SAP production systems are subject to constant changes that may adversely affect system performance or compromise SLAs, which would call for further capacity tests or system monitoring. For entities that are concerned about SAP system response times after the system has been deployed, SAP offers monitoring and optimization services to ensure that SLAs are maintained.


CHAPTER 12

Test Execution

Test execution is the phase held after the test strategies, test planning, test script documentation, and test procedures have been designed and developed; a testable software build is now ready to be deployed into the quality assurance (QA) environment. The main objective of test execution is to demonstrate that the actual test results for each test step match the expected test results. Alternatively, test execution may identify that the system configuration does not meet requirements, which causes the test team to log defects, as discussed in Chapter 13. Test execution is conducted after the test cases have been designed, documented, peer-reviewed, and approved, and the entrance criteria and/or test readiness review (TRR) have been evaluated. Test execution is conducted with either manual or automated test cases. For initial SAP implementations, test execution is conducted during the realization and final preparation phases. For existing SAP implementations, test execution is routinely held for regression testing that addresses system upgrades, the addition of new SAP modules, SAP patches, OSS (Online Service System) notes, SAP hot packs, and production transports.
The construction of a test schedule prior to the start of test execution is highly recommended to identify the sequence of dependencies in which the test cases need to be executed. A test schedule also helps to estimate the total time needed to execute all test cases associated with a testing effort and to assign the test cases to the available resources. A test schedule needs to be developed early in the software development life cycle and incorporated into the overall SAP implementation schedule. In order to develop a robust and sound testing schedule, appropriate methods and techniques must be utilized to estimate the durations necessary to execute each test case while considering the level of expertise of each test team member assigned to execute the test cases.


After the test schedule is developed and the test cases are executed, testing metrics are collected to monitor the testing progress and to help evaluate the exit and entrance criteria for the next testing cycle. Test execution also includes the storing of test results and test logs. Depending on the industry or contractual obligation under which the SAP system is to be tested, it may be necessary to archive test results and test logs in order to support testing audits and industry regulations.

TEST SCHEDULE

Given SAP's integrated modules, SAP processes must be executed in a given sequence to account for dependencies. For example, one may have to schedule the execution of year-end testing at the end of the testing cycle, because executing year-end runs at the beginning or middle of the cycle may close all profit-and-loss accounts to the balance sheet, which could hinder the testing team from executing any other test cases. Testing schedules should meet the following objectives (a dependency-ordering sketch follows this list):

■ Clearly identify all the resources needed to manually execute the test cases.
■ Identify the expected or planned duration times to execute each test case.
■ Facilitate the creation of a testing calendar.
■ Identify all testing dependencies and the appropriate sequence for executing each test case.
■ List all available manual and automated test cases.
■ Identify the status of each test case (i.e., completed, in progress, etc.).
■ Determine whether the test execution effort is behind, ahead of, or on schedule.
■ Schedule the test runs and the resources required for them.
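Dependency-driven sequencing of this kind can be computed with a topological sort. The sketch below uses graphlib from the Python standard library (3.9 and later); the test-case names and dependencies are hypothetical and would come from the project's own schedule.

    # Order test cases so that every dependency runs first.
    from graphlib import TopologicalSorter

    depends_on = {
        "Create Sales Order": set(),
        "Create Delivery": {"Create Sales Order"},
        "Billing": {"Create Delivery"},
        "Month-End Closing": {"Billing"},
        "Year-End Close": {"Month-End Closing"},  # deliberately scheduled last
    }

    order = TopologicalSorter(depends_on).static_order()
    print(" -> ".join(order))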

Early planning allows for software development and testing schedules and budgets to be estimated, approved, and incorporated into the overall software development plan. Estimates must be continually monitored and compared to actuals, so they can be revised and expectations can be managed as required.


Often, however, estimates have to be adjusted and test strategies modified to meet a nonmovable deadline. Before a test schedule is developed, it is important to define the testing tasks. If a testing schedule is dictated or imposed on the testing team, it should be made clear which tasks can actually be completed within the dictated time frame so that expectations are managed accordingly. In contrast, when a testing schedule is not dictated, the test manager is given more time to evaluate the following considerations:

■ Before designing an appropriate testing schedule, the test program tasks have to be clearly defined and scheduled.
■ Once the tasks are understood, it is important to know how to estimate schedules and duration times accordingly, and some of the estimation techniques described here need to be implemented.
■ On a more granular level, test case execution needs to be estimated in addition to all the test program tasks listed (see the section entitled Test Execution Calendar for more detail).

In order to create a testing schedule, however, it is critical to establish the test program tasks that need to be monitored and evaluated to prevent testing tasks from falling behind schedule. The next section aids in defining and understanding the test program tasks, each of which must be clearly delineated and laid out in detail to allow for effective test schedule development and the execution of a comprehensive test program.

DEFINE AND UNDERSTAND THE TEST PROGRAM TASKS [1]

Exhibit 12.1, Test Execution Work Breakdown Structure, reflects examples of the different types of test execution tasks that can be performed on an SAP project to supplement the testing activities outlined and described in the SAP ASAP Roadmap methodology. The tasks in this exhibit are suitable for companies that have SAP in a production environment and want to improve their current testing approach or methodology, and for companies undergoing initial SAP implementations.

[1] Derived from Elfriede Dustin, Automated Software Testing (Reading, MA: Addison-Wesley, 1999).


EXHIBIT 12.1 Test Execution Work Breakdown Structure

8    ..........
9    Test Execution
9.1  Environment Setup. Develop environment setup scripts. Ensure all necessary data for testing has been loaded into the test environment and identified within the test scripts.
9.2  Test Bed Environment. Construct, debug, and troubleshoot automated test scripts.
9.3  Test Phase Execution. Execute the various test phases; execute automated and manual test cases.
9.4  Test Reporting. Prepare test reports.
9.5  Issue Resolution. Resolve daily issues regarding automated test tool problems. If necessary, contact the test tool vendor for support and maintenance of test tools.
9.6  Test Repository Maintenance. Perform test tool database backup/repair.
10   Test Execution and Management Support
10.1 Process Reviews. Perform a test process review to ensure and enforce that standards and defined test processes are adhered to. Generate deficiency reports for noncompliance with established and approved standards.
10.2 Test Bed Configuration Management (CM). Maintain the entire test bed/repository (i.e., test data, test procedures and scripts, software problem reports, etc.) within a configuration management tool. Define the test script CM process and ensure that test personnel work closely with the CM group to assure test process reusability.
10.3 Test Program Status Reporting. Identify mechanisms for tracking test program progress. Develop periodic reports on test program progress; reports should reflect estimates to complete tasks in progress (Earned Value measurements).
10.4 Defect Management. Define the defect tracking workflow. Perform defect tracking and reporting. Attend defect review meetings.
10.5 Metrics Collection and Analysis. Collect and review all metrics to determine whether changes in process are required and whether the product is ready to be shipped.
11   ...........................


The structure represents a work breakdown structure (WBS) that can be used in conjunction with timekeeping activities to develop a historical record of the test execution effort expended to perform the various activities on projects. The information in a detailed WBS can be used to calculate the Earned Value [2] metric, which allows for progress tracking. Test teams may wish to further break down the elements in Exhibit 12.1 to delineate all test program activities, such as project startup, early project support, the decision to automate testing, test tool selection, test strategy development, test procedure/script development, and other such activities according to the various types of tests. For the purpose of this chapter, Exhibit 12.1 provides the WBS sample for test execution only. WBS 9.3 can be broken down further to delineate WBS activities for the various types of tests, which may include functional testing, server performance testing, archiving testing, development testing for report, interface, conversion, enhancement, workflow, and form (RICEWF) objects, scenario testing, integration testing, regression testing, boundary testing (positive and negative), security testing, memory leak testing, and response-time performance testing.
Once the test execution tasks are understood, the test manager can start estimating how long it will take to execute the tests. The test manager can also decide which tasks have to be removed, given limited budgets and schedules, and evaluate the risks associated with dropping any of those tasks. When facing a tight schedule, the test manager might also want to evaluate whether putting more people on the project along the schedule's critical path, which is known as project crashing, will speed things up, while keeping in mind that adding new people to a project that is in jeopardy rarely ends in success and can in fact increase the project's costs.

[2] Earned Value is a management technique that relates the WBS elements to schedules and to technical cost and schedule requirements.
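As a miniature illustration of the Earned Value idea, the sketch below credits each WBS element with its budgeted hours times percent complete and compares earned value to planned value; the budgets and progress figures are hypothetical.

    # Earned Value in miniature: EV = budget x actual % complete,
    # PV = budget x planned % complete, SPI = EV / PV.
    wbs = [
        # (element, budgeted hours, planned % complete, actual % complete)
        ("9.1 Environment Setup", 40, 1.00, 1.00),
        ("9.2 Test Bed Environment", 80, 0.75, 0.50),
        ("9.3 Test Phase Execution", 200, 0.25, 0.10),
    ]

    planned_value = sum(budget * planned for _, budget, planned, _ in wbs)
    earned_value = sum(budget * actual for _, budget, _, actual in wbs)
    spi = earned_value / planned_value
    status = "behind" if spi < 1 else "on or ahead of"
    print(f"PV={planned_value:.0f}h EV={earned_value:.0f}h SPI={spi:.2f} ({status} schedule)")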


FACTORS AFFECTING THE TEST EXECUTION SCHEDULE

The following factors should be evaluated before estimating duration times for the testing schedule:









Organization. Culture or test maturity of the organization. An organization that strives to meet CMM (Capability Maturity Model) level 5 requirements will have different expectations of the detailed tasks included in a testing schedule, for example, versus a startup company that is not following any specific processes (the latter is not recommended). For example, a process-centric organization generally understands that schedules cannot be dictated, but have to allow for appropriate estimation and measurements as to what can be feasibly implemented in a specified time frame. Scope of test requirements. Tests that need to be performed can include functional requirement testing, server performance testing, user interface testing, program module performance testing, program module complexity analysis, program code coverage testing, system load performance testing, boundary testing, security testing, memory leak testing, response-time performance testing, and usability testing. Test engineer skill level. This refers to the technical skill level of the individuals performing the test. As defined in Chapter 8, often a mix of testing skills is required to make up an efficient SAP testing team. If a team consists entirely of junior-level inexperienced people, the test execution schedule can drag out much longer than anticipated, because the learning curve can be much too steep. It is recommended to require a mix of testing skills and SAP experience. Test tool proficiency. The use of automated testing introduces a new level of complexity that a project’s test team may not have previously experienced. Test script programming is an expertise required and may be new to the test team, and possibly few on the test team have had experience in performing coding. Even if the test team has had experience with one kind of an automated test tool, the tool required on the new project may be different. Business knowledge. Test team personnel familiarity with the application business area is important. If there is a lack of busi-


■ Scope of test program. An effective automated test program amounts to a development effort, complete with strategy and goal planning, test requirement definition, analysis, design, and coding.
■ Start of test effort. Test activity and test planning should be initiated early in the project. This means that test engineers need to be involved in analysis and design review activities. These reviews can be used as effective testing techniques, which are essential in preventing analysis/design errors. This involvement allows the test team to more completely understand requirements and design, to architect the most appropriate test environment, and to generate a more thorough test design. Early involvement not only supports effective test design, a critically important activity when utilizing an automated test tool, but also provides early detection of errors and prevents the migration of errors from requirement specification to design, and from design into code.
■ Number of incremental software builds planned. Many industry software professionals have the perception that the use of automated test tools makes the software test effort less significant in terms of man-hours, or less complex in terms of planning and execution. Savings accrued from the use of automated test tools take time to generate; in fact, at a test team's first use of a particular automated test tool, very little savings may be realized. Savings are realized in subsequent builds of a software application.
■ Process definition. Test team utilization of defined (documented) processes improves efficiency in test engineering operations. Lack of defined processes has the opposite effect and translates into a longer learning curve for junior test engineers.
■ Mission-critical applications. The scope and breadth of testing for software applications where a failure poses a risk to human life, or is mission critical to an organization, are greater than for software applications that do not pose a high risk. For example, the performance of software controlling a heart monitor in a hospital setting is more critical than that of game software that entertains people.
■ Test development/execution schedule. Short time frames for performing test development and execution may interject inefficiency in

12_4782

2/5/07

274

11:19 AM

Page 274

TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE

test engineering operations and require that additional test engineering effort be applied.

TEST EXECUTION CALENDAR

In addition to a test schedule, a test execution calendar can be constructed. A test execution calendar provides an alternative view or format to the testing schedule. The testing calendar is not as robust as a formal testing schedule, since it does not contain critical paths or a baseline and does not show schedule variances or clearly identified testing dependencies, but it does present, at a high and simple level, all test cases scheduled for test execution. The test calendar may also be easier and less time consuming to maintain than a full-blown test schedule built with scheduling software. A test execution calendar can be constructed as an input to building a formal test execution schedule, or it can become a by-product of the test schedule. Exhibit 12.2 shows a test execution calendar for an SAP implementation that has to execute multiple test cases over a five-day window as part of the regression testing cycle.

To construct a testing calendar, one needs to determine all test cases that must be executed as part of a testing cycle and the expected execution duration for each test case. In Chapter 8, techniques and ballpark estimates were provided to design, construct, create, and execute SAP test cases for different testing efforts such as unit, scenario, integration, and so on. In addition to the historical information technique for estimating duration times based on prior testing efforts, another effective technique is the expert method, whereby individuals knowledgeable about the test case to be executed provide an estimate, based on previous work experience, of how long it would take to execute a given process. For instance, the SAP FICO (Finance and Controlling) expert may estimate, based on familiarity with the project's requirements, that the test case for month-end testing would take eight hours to execute and verify manually.

The test manager can set up meetings with different members of the configuration and development teams to capture duration times for all test cases to be executed and the necessary sequence in which the test cases will be executed. Once the test manager captures this information for all in-scope test cases, a testing calendar can be constructed, as seen in Exhibit 12.2, and subjected to peer reviews.

EXHIBIT 12.2 Sample SAP Testing Calendar Depicting Execution of Test Cases over a Five-Day Period (calendar grid omitted: an SAP upgrade regression test spanning Monday 07/10 through Friday 07/14 of July '06, with 8:00 A.M. period prep activities, a daily test results meeting, a midday lunch break, and time-slotted test cases such as Payroll Run, MRP Run, General Ledger, Stock Transfer Orders, Asset Management, Goods Movement, Outline Agreements, Wage Types, Stock Overview, Shipments, Benefits, Invoicing, C-folders, Personnel Administration, EBP Shopping Cart/Catalogs, Network Activities, SRM, Month-End Closing, Purchase Order Interfaces, Billing, Deliveries, BW InfoCubes, Bin-to-Bin Movements, WBS Elements, Scrapping, Requisitions, Financial Reconciliations, CATS, and Goods Issue, continuing past 6:00 P.M. as needed, with a team review to discuss defects)

As a rule of thumb, test cases that are most critical to the business; that span multiple SAP modules or contain heavy customization; that require verification of data against legacy systems; or that are known to be unstable, high risk, or prone to defects should be executed as early as possible, or as soon as dependencies allow, to permit sufficient time to resolve defects.

When estimating the time needed to execute manual test cases, it is important to recall that the execution time includes both the time spent on manual keystrokes to run the test steps and test conditions and the time needed to manually record all test results.


When test cases are executed with automated test tools, the test tool typically generates an execution log that shows all test results, which test steps completed successfully, and which processes/objects were verified.

In SAP, some manual test executions for a given end-to-end process or SAP transaction code need to be repeated multiple times to account for process and data variations. For instance, the SAP transaction code VA01 for creating a sales order may need to be executed multiple times to account for different order types, such as repair, stock transfer, and domestic and international orders. The test schedule and/or test calendar may need to be adjusted to include all potential variations of a given test case.

The test calendar can be created for multiweek or even multimonth test cycles. It can include the name of the tester assigned to execute each test case and show the duration of each test case against a time grid, as shown in the vertical bar of the test calendar in Exhibit 12.2. Test cases whose execution times extend over multiple days can be shown as continuation test cases on the following days. Another feature of the test calendar is that test processes or modules to be tested can be shown in different shades, as seen in Exhibit 12.2. When developing the test calendar, create a timeline for all the different tests under each testing phase (i.e., unit and other development testing, scenario testing, integration testing, performance testing, user acceptance testing, etc.).

The test calendar can be highlighted in different colors or altered daily in the event that the execution of test cases takes longer than planned; this information can then be recycled to serve as historical information for future system releases and testing cycles. Similarly, test cases that finish ahead of schedule or earlier than expected should also be highlighted and the information reused for future testing cycles. The test manager can update both the test calendar and the test schedule with the briefings provided at the daily testing meetings. The test calendar in Exhibit 12.2 shows that time is allocated every day, prior to the start of testing, to review test results from the previous day.
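To make the duration-capture step concrete, the following is a minimal sketch of deriving a day-by-day calendar from test case duration estimates. The test case names, hours, priorities, and the eight-hour testing day are illustrative assumptions, not values taken from this chapter.

# A minimal sketch of how a test execution calendar could be derived from
# duration estimates. All names and numbers below are hypothetical.
from datetime import date, timedelta

HOURS_PER_DAY = 8  # assumed daily testing window

# (test case, estimated hours, priority: lower number = more critical)
estimates = [
    ("Month-End Closing", 8, 1),
    ("Payroll Run", 6, 1),
    ("MRP Run", 4, 2),
    ("Invoicing", 3, 2),
    ("Stock Overview", 2, 3),
]

def build_calendar(estimates, start_day):
    """Assign test cases to days, most critical first, splitting a test
    case across days when it does not fit in the remaining hours."""
    calendar, day, hours_left = {}, start_day, HOURS_PER_DAY
    for name, hours, _ in sorted(estimates, key=lambda e: e[2]):
        while hours > 0:
            run = min(hours, hours_left)
            calendar.setdefault(day, []).append((name, run))
            hours -= run
            hours_left -= run
            if hours_left == 0:            # day is full; continue tomorrow
                day += timedelta(days=1)
                hours_left = HOURS_PER_DAY
    return calendar

for day, slots in build_calendar(estimates, date(2006, 7, 10)).items():
    print(day, slots)

A real calendar would also carry tester assignments and dependency ordering (see the next section), but even this simple fill-forward layout makes over-allocated days visible before execution starts.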


TEST DEPENDENCIES

When setting up the testing calendar, it is important that, within the specific testing context, testing dependencies are planned for or at least reviewed prior to the execution of the first test case. Chapter 7 discussed techniques and methods, such as the test readiness review (TRR), that can be used as checklists for evaluating and assessing the readiness to initiate the first test case execution. The following are some dependencies that should be examined prior to the start of test case execution. Furthermore, the test execution calendar and schedule themselves have dependencies, since it is possible that a given test case cannot be started until another, dependent test case has either completed or partially completed.

■ Test environment. The test environment has to be ready. It is important that the testing environment is separate from the development environment in order to maintain an uncompromised test environment. Also consider the lead times needed to procure any hardware or software required for this independent testing effort.
■ Test data. Test data has to be prepared for the testing efforts. For example, in order to test data transfer rules between interfaces, the test data has to be considered and prepared accordingly. Data from external systems may need to be preselected for testing interfaces. Test data may also need to be loaded into the SAP test environment with SAP CATT scripts, automated test tools, or other mechanisms prior to the start of test execution. Special attention needs to be given to test cases that consume SAP data and require unique data records, or test cases that can cause data conflicts because they attempt to process the same data record.
■ Test case dependencies. In order to complete a test, a setup might be needed that requires the run of another test case. Test case dependencies have to be considered. For example, in order to run a specific financial report and analyze its output, numerous test cases might have to be run first to populate the required report data.
■ End-to-end processes. Other testing dependencies require the test manager to consider testing of chains of SAP transactions that make up an end-to-end process cutting across multiple modules, such as order-to-cash, purchase-to-pay, and hire-to-retire, with external data and converted data.
■ Roles/profiles. In order to test roles and profiles, appropriate access controls have to be set up and prepared to allow for this type of testing.

Again, these dependencies demonstrate that multiple variables and parameters need to be evaluated prior to the start of the test execution cycle and that omitting a single variable may cause the testing cycle to come to an abrupt halt.
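Where test case dependencies are recorded explicitly, the execution sequence can be derived rather than hand-ordered. The sketch below is illustrative: the test case names and dependency edges are hypothetical, and Python's standard graphlib module does the ordering.

# A minimal sketch of dependency-aware ordering for test case execution.
# The test cases and dependency edges are hypothetical; a real project
# would pull these from the test schedule or a test management tool.
from graphlib import TopologicalSorter  # Python 3.9+

# Map each test case to the test cases that must complete before it.
dependencies = {
    "Run Financial Report": {"Post Journal Entries", "Create Sales Order"},
    "Post Journal Entries": {"Load Master Data"},
    "Create Sales Order": {"Load Master Data"},
    "Load Master Data": set(),
}

# static_order() raises CycleError if the dependencies are circular,
# which would also signal a planning mistake in the test calendar.
execution_order = list(TopologicalSorter(dependencies).static_order())
print(execution_order)
# e.g., ['Load Master Data', 'Post Journal Entries', 'Create Sales Order',
#        'Run Financial Report']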

TEST METRICS

After the test cases have been scheduled with dependencies and executed, the test team collects testing metrics. It is important to track the schedule and test completeness via testing metrics. The execution of test cases is an activity that in and of itself generates multiple testing metrics, which provide management with transparency into testing progress and the resolution of defects. Testing metrics provide information such as how many defects have been resolved, how many test cases have been executed, the average time to resolve a defect, the average time to execute a test case, how many test cases a particular tester has executed, the number of requirements met as a result of executed test cases, and so on. Testing metrics also provide answers that assist in evaluating the exit, entrance, and release criteria.

Most test management tools automatically provide a comprehensive set of real-time reports, graphs, and charts that facilitate and expedite the gathering of testing metrics. In the absence of a test management tool, spreadsheets and much manual effort can serve as the tool for tracking test completeness. In Exhibit 12.3, testing metrics are collected and gathered manually with a spreadsheet: a summary worksheet is developed that tracks the test execution percentage completion daily.

EXHIBIT 12.3 Testing Metrics Collected with a Spreadsheet (spreadsheet illustration omitted)

Testing metrics are important because they help to address the question "Is testing completed?" Everyone3 wants to know when testing is completed; therefore, it is imperative that test execution is tracked effectively. This is accomplished by collecting data or metrics that help identify the test progress, so that corrections can be made to assure success. Additionally, using these metrics, the test team can predict the release date for the application. In the case of a dictated release date, these metrics can be used to measure coverage.

3. Adapted from Elfriede Dustin, Automated Software Testing (Reading, MA: Addison-Wesley, 1999).

Progress metrics are collected iteratively during the various stages of the test execution cycle. Some sample progress metrics are outlined below. Testing metrics will vary widely across SAP installations; the system integrator providing SAP services, the client paying for SAP services, and the auditors for the SAP project can define and establish the testing metrics to be collected, disseminated, distributed, and published based on the project's needs, charter, and contractual obligations:

Test Procedure Execution Status (%) = Executed number of TP / Total number of TP

This execution status measurement divides the number of test procedures (TP) already executed by the total number of test procedures planned. By reviewing this metric value, the test team can ascertain the number of remaining test procedures that still need to be executed. This metric, by itself, does not provide an indication of the quality of the application; it only provides information about the depth and progress of the test effort, without any indication of the success of the effort itself.

It is important to measure the test procedure steps executed, not just entire test procedures. For example, one test procedure might contain 25 steps. If the tester successfully executes steps 1 through 23 and then encounters a showstopper at step 24, it is not beneficial to fail the entire test procedure; a more precise measurement of progress (the number of steps executed) is more useful. Measuring test procedure execution status at the step level results in a highly granular progress metric. The best way to track test procedure execution is by developing a matrix that contains the identifier of the build under test, a list of all test procedure names, the tester assigned to each test procedure, and the percentage complete, updated daily and measured as test procedure steps executed successfully versus total test procedure steps. Many test management or requirements management tools help automate this process.

Defect Aging = Date defect was opened versus date defect was closed

Another important metric in determining progress status is the turnaround time for a defect to be corrected, also called defect aging. Using defect aging data, the test team can conduct trend analysis. For example, 100 defects may be recorded on a project. When documented past experience indicates that the development team can fix as many as 20 defects per day, the turnaround time for these problem reports may be only one workweek; the defect-aging statistic, in this case, would reflect an average of five days. When the defect-aging measure grows to 10 to 15 days, the slower response time by the developers may impact the ability of the test team to meet scheduled deadlines. Note that the defect-aging measurement is not always appropriate and may need to be modified depending on the complexity of the specific fix being implemented, among other criteria.

Defect aging is a high-level metric that verifies whether defects are being addressed in a timely manner. If developers do not fix defects in time, this can have a ripple effect: testers will run into related defects in other areas and end up logging duplicates, when one defect fix could have prevented all the subsequent defects from occurring. In addition, the older a defect becomes, the more difficult it may be to correct, since additional code may be built on top of it. Correcting the defect at that point may have much larger implications for the software than when it was originally discovered.

Defect Fix Retest = Date defect was fixed and released in a new build versus date defect was retested

The defect fix retest metric provides a measure of whether the test team is retesting corrections at an adequate rate. If defects that have been fixed are not retested adequately and efficiently, progress can stall, since developers cannot be assured that a fix has not introduced a new defect or that the problem has been properly corrected. This last point is especially important: code being developed on the assumption that previous code has been fixed will otherwise have to be reworked. If defects are not being retested quickly enough, the testing team has to be reminded of the importance of retesting fixes so that developers can move forward knowing their fixes have passed the test.

Defect Trend Analysis = Total number of defects found versus testing life cycle

Defect trend analysis helps determine the trend in defects found. Is the trend improving as the system-testing phase nears completion, or is it worsening? This metric is closely related to the newly opened defects measure discussed below. The number of newly opened defects should decline as the system testing phase nears the end; otherwise, it might be an indicator of a severely flawed system. If the number of defects found increases with each subsequent test release, assuming no new functionality is being delivered and the same code is being tested, only with code fixes, it could be indicative of numerous problems, such as:

■ Improper code fixes from development for previous defects.
■ Incomplete testing coverage for each build; new testing coverage discovers new defects.
■ Tests that could not be executed until some of the defects were fixed; new defects are then found once the previous defects are resolved and the tests can proceed further into the code.
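Assuming defect records carry their open and close dates, defect aging and a simple defect trend can be computed directly from the defect log. The record layout below is a hypothetical illustration, not a format prescribed by any particular tool.

# A minimal sketch for computing defect aging and a weekly defect trend.
# The field names and sample records are hypothetical.
from datetime import date

defects = [
    {"id": 1, "opened": date(2006, 7, 3), "closed": date(2006, 7, 7)},
    {"id": 2, "opened": date(2006, 7, 5), "closed": None},  # still open
    {"id": 3, "opened": date(2006, 7, 10), "closed": date(2006, 7, 12)},
]

def average_aging(defects, today):
    """Defect aging: days from open to close (or to today if still open)."""
    ages = [((d["closed"] or today) - d["opened"]).days for d in defects]
    return sum(ages) / len(ages)

def weekly_trend(defects):
    """Defect trend analysis: number of defects opened per ISO week."""
    counts = {}
    for d in defects:
        week = d["opened"].isocalendar()[1]
        counts[week] = counts.get(week, 0) + 1
    return dict(sorted(counts.items()))

print(average_aging(defects, today=date(2006, 7, 14)))  # e.g., 5.0
print(weekly_trend(defects))                            # e.g., {27: 2, 28: 1}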


Quality of Fixes = Previously working functionality versus number of new errors introduced

The value obtained from this calculation provides a measure of the quality of the software corrections implemented in response to software problem reports. This metric helps the test team determine the degree to which other, previously working functionality has been adversely affected by software corrections. When this value is low, the test team needs to make the developers aware of the problem. This metric is also referred to as the recurrence ratio: it measures the percentage of fixes that fail to correct the defect or, more specifically, that introduce a new error or break previously working functionality. This ratio can be useful for measuring the success of the unit and integration testing efforts.

Defect Density = Total number of defects found / Executed number of TP per requirement

The defect density metric is an average calculated by taking the total number of defects found in a specific functional area or requirement. For example, if there is a high defect density in a specific functional area, it is important to conduct a causal analysis using the following types of questions:

■ Is this functionality very complex, so that a high defect density is to be expected?
■ Is there a problem with the design or implementation of the functionality?
■ Were inadequate resources assigned to the functionality because an inaccurate risk had been assigned to it? It also could be inferred that the developer responsible for this specific functionality needs more training.

Additionally, when evaluating defect density, the priority of the defects needs to be considered. For example, one application requirement may have as many as 50 low-priority defects while the acceptance criteria are still satisfied, whereas another requirement might have one open high-priority defect that prevents the acceptance criteria from being satisfied.

These are just a few of the metrics that need to be gathered to measure test program execution; many more are available. These form a core set of metrics to be tracked in order to allow for corrective activity if necessary, to point out risk areas, and to allow successful execution of the test program. Again, testing metrics suitable to the project's needs must be defined prior to the start of test execution.
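To make the step-level execution status and defect density measures concrete, the following sketch computes both from hypothetical test procedure records; the data layout and sample numbers are assumptions for illustration only.

# A minimal sketch of two of the progress metrics described above.
# Test procedure names, step counts, and defect counts are hypothetical.

test_procedures = [
    # (name, steps executed successfully, total steps, defects found)
    ("TP-01 Create Sales Order", 25, 25, 2),
    ("TP-02 Month-End Closing", 23, 40, 5),
    ("TP-03 Goods Issue", 0, 18, 0),
]

def step_level_execution_status(procedures):
    """Execution status (%) measured at the test-step level."""
    executed = sum(done for _, done, _, _ in procedures)
    total = sum(steps for _, _, steps, _ in procedures)
    return 100.0 * executed / total

def defect_density(procedures):
    """Defects found per fully executed test procedure."""
    executed_tps = sum(1 for _, done, steps, _ in procedures if done == steps)
    defects = sum(d for _, _, _, d in procedures)
    return defects / executed_tps if executed_tps else float("nan")

print(f"{step_level_execution_status(test_procedures):.1f}%")  # 57.8%
print(defect_density(test_procedures))  # 7 defects / 1 completed TP = 7.0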

TEST LOGS AND RESULTS

In addition to testing metrics, test execution brings about test results that show that the system either works as intended or fails to do so. Depending on the SAP environment and industry regulations, test results may need to be stored in a secured repository that offers version control and audit trail capabilities. Test results can be stored either manually, as hard copies, or electronically, as in scanned images. The project's policy will dictate how long test results must be stored, the naming standards for test results, who signs the test results, how test results will be archived, and who has permission to modify and/or review the test results.

Test results can include the following information: the test status for each test step, the screenshots used to show that a test step was executed successfully, any comments annotated after a test script was executed, log files or forms produced after a test case was executed, signatures provided to approve the test results, and automatically generated test logs from automated test tools. Most test management tools will store test results from manual and automated test case execution, including screenshot printouts. Test results can also be stored in spreadsheets, assuming that the spreadsheets are placed in a common repository such as a shared drive or Solution Manager.


CHAPTER 14

Management of Test Results and Defects

After test cases are executed, test results are reported and testing defects are resolved. Test results consist of test logs generated automatically from the execution of automated test cases and results reported from manually executing each test step. A test result takes on the value of "pass" when the actual test results match the expected test results; otherwise, the test result takes on the value of "fail." A successful test result with a value of pass indicates that the execution of the test case has successfully fulfilled the requirement. Test results with a value of fail require analysis and can become defects. Depending on its priority or category, a defect has the potential to delay or postpone a system release date.

Defects are stored and managed in a secured system with database, reporting, and query capabilities. The Change Control Board (CCB) helps to manage, prioritize, assign, and categorize defects and subsequently determines how each defect will be resolved. The life cycle of a defect may include multiple states and handoffs among project members for initial testing, final validation, and approval before it is closed. Reports are generated for all stored defects. Defect reports are useful for supporting the exit criteria for each testing phase; they can show the trend of defects for a given time segment, the number of outstanding defects, and the closure rate for defects.

NEED FOR DOCUMENTING TEST RESULTS

The formality of the reporting and management of test results is largely a function of the testing effort, project discipline, industry regulations, and the resources dedicated to the testing effort.


Entities implementing SAP (whether privately held, publicly traded, federal government, public sector, or not-for-profit) will have a variety of reasons for documenting SAP test results. The reasons may be imposed on the entity by regulations such as Sarbanes-Oxley (SOX) compliance, may be part of a corporate approach and standard for implementing information technology such as the discipline offered by the Capability Maturity Model (CMM) from the Software Engineering Institute (SEI), or may serve as a means of measuring and verifying the system integrator's completion rates for the execution of test cases. The following are some reasons that companies and entities in different fields will need to document SAP test results:



■ A publicly traded company implementing SAP will need to establish a procedure for storing and managing test results to meet compliance with Section 404 of SOX. Section 404 contains five phases, including management testing, key controls, and audits.
■ A pharmaceutical company implementing SAP will need to comply with good manufacturing practice (GMP) requirements set forth in the Quality System (QS) regulation promulgated under Section 520 of the Food, Drug, and Cosmetic (FD&C) Act. In short, according to the general principles of software validation of the Food and Drug Administration (FDA): "The validation [of requirements] must be conducted in accordance with a documented protocol, and the validation results must also be documented. (See 21 CFR §820.70(i).) The test cases should be executed and the results should be recorded and evaluated to determine whether the results support a conclusion that the software is validated for its intended use."1
■ An entity requesting SAP services from a systems integrator may ask that all test results from the integration test be documented as proof that the test cases were executed. Test results may show that a test case was executed successfully or that it failed and thus required the successful resolution of a defect to ensure that the requirement was implemented correctly.

1. www.fda.gov/cdrh/comp/guidance/938.html#_Toc517237968.

■ A corporation or government agency adhering to the CMM at level 2 or higher as part of its documentation standards may document all test results.
■ The Department of Defense, as part of its DoD 5000 series acquisition regulations, may require that test results be captured, documented, and stored during the independent testing phases, as well as during operational assessments, for the implementation of an enterprise resource planning (ERP) system such as SAP.

Independent of whether documenting test results is mandated or optional, the most important reason to do so is to trace system, functional, and technical requirements to the successful execution of test cases. Documented actual test results demonstrate either that a test case was successfully implemented, because its execution obtained a pass status, or that it failed, because the actual results did not match the expected results documented in the test case. Test cases reach a successful state of "completed" when all test steps have been completed with a pass status (since a test case may have multiple test steps) and when the appropriate stakeholders have approved and signed off on the test results. Test cases that have a status of failure after execution demonstrate that the requirements were not implemented correctly, or that some other condition prevented the tester from proving that the requirement was implemented correctly.

STORING TEST RESULTS AND METRICS GATHERING

Test results provide a chronological record, with audit trails, of relevant details about the execution of tests. Test results are generated in two formats: (1) manual, whereby testers report test results for each individual test step; or (2) automated, whereby an executed automated test case automatically produces a test log with all the test results. An automated test log will contain information such as start and end times for the test, results for verification points (i.e., verifying that SAP produces a status bar message at a designated point in a process), and all the automated test steps that failed and passed.

Management and storage of test results can take place either in a test management tool, with query and reporting capabilities for both manual and automated test cases, or within spreadsheets/text editors that are not tied together, for manually executed test cases. The decision to utilize a test management tool to store and manage test results depends largely on the project's budget, the learning curve for the project's members, and the project's audits and compliance regulations. A test management tool for collecting and storing test results and documenting defects offers the following benefits:

■ Audit trails.
■ Integration with automated test tools.
■ Slicing/dicing of data.
■ Security log-on features.
■ Scheduled reports.
■ Version control.
■ E-mail workflow based on defined business rules.
■ Customizations.
■ A single repository for all data.
■ Greater transparency for audits and managerial reports, since it offers integrated data from a single location and both custom and canned reports.
■ Historical data (i.e., how long on average it takes to resolve defects reported against functionality for a particular SAP module, how long on average it takes developer "X" to resolve a defect, which requirements are most likely to cause defects when executed, etc.).

However, collecting test results in spreadsheets, text editors, and notepads offers the following benefits:

■ Inexpensive, since spreadsheets and text editors are normally included with the standard software image.
■ Reduced learning curve for project members.

Companies with reduced budgets and limited functional scope can, in the short run, test SAP manually with spreadsheets in which test cases and test results are documented. Over the long run, however, as the project's SAP functionality grows, more SAP modules and bolt-ons (i.e., Supplier Relationship Management) are added, and test results are subjected to more third-party audits, the likelihood of successfully managing SAP test results and defects with disconnected spreadsheets decreases. When the number of test cases and scenarios increases as the project adds new requirements, functionality, enhancements, and patches that are subject to multiple rounds of testing during integration and regression testing, the need for a test management tool is magnified. Many large projects that attempt to collect test metrics and test results to meet audits cannot do so effectively, or at all, with a series of disconnected spreadsheets.

Test results need to include information showing that a test step was actually executed successfully or that it failed to execute. The test results for manually executed test steps are documented in the field known as "actual results." Automated test cases may automatically update all test steps with the status of pass for a successfully executed automated test case, but may not actually update the actual results field, which forces the reviewer of an executed test case to examine the automatically generated test log in order to verify the individual results for each test step. For instance, an automated test case may include the creation of an SAP project (transaction code CJ20N); the project may have been successfully created in SAP, but to learn the actual result for this automated test case it is necessary to open the automatically created test log and review which project number the system actually created.

Screenshots attached to test results can show that a particular process was executed successfully, or document the failure in the event that a defect is needed. Screenshot printouts can be used in SAP to show that a particular test step produced an expected outcome such as a status bar message (i.e., Sales Order XXX was created for transaction VA01), that a workflow object was triggered and routed correctly, that a financial report produced correct calculations, or that custom fields properly displayed and processed information on an SAP screen. Screenshot printouts can be attached to test results to demonstrate that the system functions correctly at both the front and back end. Screenshot printouts are also useful when a defect is created, to help development or configuration team members understand the error after the tester identifies a system error from the execution of a test step.

Both test results and screenshot printouts can be archived and either stored electronically or printed and stored in a file cabinet after they have been completed and approved. Electronic test results can be obtained by scanning the test results and storing them in a secured electronic medium.


For printed results stored in file cabinets, a designated person will need to administer the hard-copy results and control access to the file cabinet to prevent test results from being tampered with. After the test results have been accepted and approved, the test team can collect data and use it as input for metrics that assist management in determining test progress. Again, test management tools facilitate the collection of test data and the generation of reports showing which test cases have been executed, which test cases have been assigned, the total number of test cases that have been reexecuted in the event of changes or defects, which test cases are behind or on schedule, and so on. Metrics compiled from test results can be provided to the appropriate stakeholders, including senior management for large testing efforts. Collecting test data from disconnected spreadsheets and text editors may create logistical problems, or prove impossible, for the test team.

REPORTING TEST DEFECTS

Whenever the actual test results differ from the expected test results documented within a test case, a defect is reported. The tester who discovers the defect while executing the test case submits the defect within the defect-tracking tool. After the defect is submitted, it is reviewed, assigned, and subsequently closed when it has been successfully resolved. For example, Exhibit 13.1 shows a typical process for resolving a defect, whereby a defect is identified, assigned, reviewed by the CCB, resolved and retested, and closed when the original submitter is satisfied with the resolution.

The CCB is involved with the resolution of a defect during testing because SAP is an integrated solution, and even the most benign system change made to resolve a defect can have cascading effects on other system components, affect the project's scope, and consequently increase the project's costs. The CCB reviews the validity and merits of submitted defects based on the project's requirements. The CCB may decide that resolving the defect is out of scope for the project's existing SAP release, assign the defect to the appropriate team members for resolution, or defer the defect to a future system release.

EXHIBIT 13.1 Typical Approach for Resolving a Defect (life-cycle flowchart omitted: the originator submits the defect; the team lead evaluates and reviews it, closing duplicates; the CCB determines impact and closes out-of-scope defects; an assigned team member fixes the problem, implements and verifies the solution, and flags it as ready for test; the tester verifies the fix, and the defect is approved and closed when the test results pass, or rejected back to the team member when they fail)


The rigor and discipline applied to reporting and tracking defects vary across testing efforts. For instance, during the unit-testing phase, defects may not be formally reported or documented, and defects may be resolved on the fly without aid from the CCB. During integration testing, however, all defects may be documented, reviewed, assigned, and closed with proper documentation, including screenshot printouts and signatures from all affected stakeholders. Furthermore, resolutions to defects may not be transported to the next environment until all corresponding documentation for the defect has been validated.

The main components and attributes of a defect include its severity, state, category, description, impact, and the time and resource estimates for resolving it. Exhibit 13.2 shows potential categories for assigning SAP defects, in addition to proposed resolutions for each category. It is important to assign categories to SAP defects, since doing so can lead to the correct assignment of the defect from the get-go, as opposed to assigning the defect to individuals who are not responsible for resolving it.

Severity describes how important a defect is to the business, whether it is a "showstopper" or a minor defect that can be addressed at a future date. Ranking defects by priority or severity is important because it helps to assess whether the system is ready for go-live or can move from one testing effort to the next. For instance, a defect identified during integration testing showing that an MRP (Material Requirements Planning) run cannot be carried out, that sales orders cannot be created consistently, or that month-end closing activities cannot be accomplished may delay the go-live for an SAP implementation and may show that the system cannot exit integration testing until these defects are resolved. In contrast, a defect showing that time cannot be entered through the online timesheet may not be high priority, since it has a workaround: employees can enter time on manual paper forms. Typically, defects that are critical to the business and operations and have no documented solution or workaround warrant the most attention from the development and configuration teams, and their resolution is critical to continuing testing activities. Exhibit 13.3 displays severity levels for SAP defects and their impact on testing activities.

EXHIBIT 13.2 Categories and Recommended Resolution Methods for Reported SAP Defects

Data Defect. The test case was documented and executed with invalid test data. The process or object to be tested cannot be tested until valid data can be identified. Resolution: Identify and remove the erroneous data value from the test case, replace it with a valid value, and reexecute the test.

RICEWF (ABAP) Coding Defect. The ABAP program, user exit, or object is defective in that it does not meet the technical specifications or requirements documented and managed in the requirements tools. This type of defect is also triggered when the program fails to function, fails to compile, or produces short dumps. RICEWF objects include Reports, Interfaces, Conversions, Enhancements, Workflow, and Forms. Defects with batch scheduled jobs can also be reported under this category. Resolution: Identify the programming or development defect if any RICEWF object is not meeting documented requirements or technical or functional specifications. Assign the defect to the development team for resolution and initial testing. The functional team reexecutes the test case for validation and approval.

Configuration Defect. The SAP configuration or configuration settings do not meet the functional requirements captured during the blueprint phase or stored in the requirements management tool. Resolution: Report the defect and assign it to the Change Control Board for further review. The CCB can assign the defect (if it is in scope) to the appropriate configuration team for resolution. The configuration team resolves and tests the defect, then assigns it to the test team or subject matter experts for final validation.

User Role Defect. The SAP test case cannot be executed because the role assigned for the execution of the test case does not have all the necessary authorizations based on documented security requirements and/or segregation of duties. Resolution: Report the defect to the security team and identify the correct role, or modify the existing role to allow test case execution to resume.

Vendor Software Defect. The SAP solution as delivered out of the box has deficiencies, errors, bugs, or defects not associated with the project's custom or unique system settings. Resolution: Contact the in-house SAP representative or SAP support (if a software maintenance agreement is in place) to troubleshoot and resolve the problem. Defect resolution may require the application of OSS notes, patches, hot packs, or a system upgrade.

End User Error. This is a bogus defect. This category applies when there is nothing wrong with the SAP system and/or documented requirements; the tester logged the defect incorrectly because he or she did not have sufficient training in SAP, did not understand the test case, or executed the test case incorrectly. Resolution: Provide further training for the tester executing the test case, or update the test case documentation so that test steps are clearly understood by a wider audience of test participants with different levels of SAP knowledge.

Requirements Defect. This defect occurs when the application meets the documented requirement but there is a problem with the documented requirement itself. The requirement may be ambiguous, incomplete, inconsistent with the company's rules and policies, and so on. Resolution: Turn the requirement over to the Change Control Board (CCB) for further evaluation. The requirement may be deferred, waived, scrapped, or reworded under a controlled process for implementation. Update test cases and test data to provide coverage for the deficient or erroneous requirement and retest the system to verify the requirement.
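Because the category largely determines who should receive a defect, the routing guidance in Exhibit 13.2 can be captured as a simple lookup so that new defects are assigned correctly from the get-go. A minimal sketch follows; the routing strings paraphrase the exhibit and are illustrative only.

# A minimal sketch routing a new defect to a group based on its
# Exhibit 13.2 category. The routing targets are illustrative.
ROUTING = {
    "Data Defect": "Test team (fix test data and reexecute)",
    "RICEWF (ABAP) Coding Defect": "Development team",
    "Configuration Defect": "CCB, then configuration team",
    "User Role Defect": "Security team",
    "Vendor Software Defect": "In-house SAP representative / SAP support",
    "End User Error": "Training / test case documentation update",
    "Requirements Defect": "Change Control Board (CCB)",
}

def route(category):
    """Return the group proposed by Exhibit 13.2 for this category."""
    try:
        return ROUTING[category]
    except KeyError:
        raise ValueError(f"Unknown defect category: {category}") from None

print(route("User Role Defect"))  # Security team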

When a defect is submitted, a severity is chosen, and it is refined later when the CCB and/or team leader reviews the defect. A tester may perceive a defect as being of the highest severity, only to find that a configuration team leader has a workaround for it, so its severity level is reduced. Assigning the severity level for a defect may involve multiple individuals from different teams. The severity of a defect determines how critical the defect is to the business and also how quick the turnaround time should be for resolving it.

EXHIBIT 13.3 Defect Severity Levels

Severity 1. SAP crashes or locks up: (a) critical SAP scenarios, features, or requirements cannot be performed; (b) a requirement of the highest level of importance cannot be performed. Testing impact: Testing cannot proceed until the defect is resolved.

Severity 2. System is operable but has major technical problems and no workarounds exist: (a) required system capability or functionality cannot be performed and no workaround solution is known/available; (b) the project's cost or schedule is at risk and no workaround solution is known/available. Testing impact: Continuing to test the system under this severity may jeopardize the testing schedule.

Severity 3. Development or configuration problem (medium/major) for which an acceptable workaround has been identified: (a) required system capability or functionality cannot be carried out, but a workaround solution has been identified; (b) the project's cost or schedule is at risk, but a workaround solution is known/available. Testing impact: Little or no impact to the testing schedule.

Severity 4. Development or configuration problem (minor): (a) results in end-user inconvenience but does not affect a requirement, system capability, service level agreement, or functionality; (b) does not affect other testing tasks or the execution of other test cases. Testing impact: No impact to testing activities/schedules.

Severity 5. Minor: problems that do not require system changes, or "nice to have" items that are not necessary to continue operations or meet documented requirements; problems related to documentation. Testing impact: No impact to testing activities/schedules.


A system integrator implementing SAP for a customer may be told that defects with a severity level of one must be solved within the same business day that the defect was identified. Conversely, a client may expect the system integrator to resolve minor defects within a workweek. The client and the system integrator must reach a reasonable accord as to what the turnaround times will be for resolving defects, and document all assumptions made in estimating those turnaround times. Exhibit 13.4 provides suggested turnaround times for resolving defects based on assigned severity levels.

During its life cycle, a defect may enter several states that allow the project team to track and monitor its status. A defect management tool that includes e-mail workflow functionality is recommended for tracking and monitoring the status of a defect. For instance, when the status of a defect is changed from "assigned" to "ready for retest," an automatic e-mail is sent to the test team indicating that the defect can be retested in the development or test environment before it is ready for transport. Exhibit 13.5 provides several defect states and definitions for each state. The states of a defect can be customized or changed based on the project's preferences and standards. A project may also decide who is responsible for changing the status of a defect; for example, a project may specify that only a client representative can close out a defect that the system integrator resolved.

EXHIBIT 13.4 Proposed Turnaround Times to Resolve a Defect

Severity 1: (a) as soon as possible (system-down emergency), or (b) within 1 business day.
Severity 2: (a) as soon as possible, or (b) within 1–3 business days.
Severity 3: within 4 business days.
Severity 4: within 1–2 workweeks.
Severity 5: if within scope, resolve at any time within the existing release or future releases.
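As an illustration of how such an accord could be monitored mechanically, the sketch below flags open defects that have exceeded a severity-based turnaround target. The targets mirror the suggestions in Exhibit 13.4, but the record fields and the use of calendar days rather than business days are simplifying assumptions.

# A minimal sketch that flags defects exceeding the suggested turnaround
# times from Exhibit 13.4. Calendar days stand in for business days, and
# the defect records are hypothetical.
from datetime import date

# Upper bound of the suggested resolution window, in days, per severity.
TURNAROUND_DAYS = {1: 1, 2: 3, 3: 4, 4: 10, 5: None}  # None = any time in scope

open_defects = [
    {"id": "D-101", "severity": 1, "opened": date(2006, 7, 10)},
    {"id": "D-102", "severity": 3, "opened": date(2006, 7, 5)},
    {"id": "D-103", "severity": 5, "opened": date(2006, 6, 1)},
]

def overdue(defects, today):
    """Return defects whose age exceeds their severity's turnaround target."""
    late = []
    for d in defects:
        limit = TURNAROUND_DAYS[d["severity"]]
        age = (today - d["opened"]).days
        if limit is not None and age > limit:
            late.append((d["id"], d["severity"], age))
    return late

print(overdue(open_defects, today=date(2006, 7, 14)))
# [('D-101', 1, 4), ('D-102', 3, 9)]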

EXHIBIT 13.5 Possible States of a Defect

Submitted. The tester executed a test case and reported the defect for resolution; it is awaiting team lead review.
Review-Team Lead. Defects in this state are being evaluated by the team lead for assignment into one of the following states: Open-Unassigned, Closed-Duplicate, Closed-Invalid, Open-Assigned.
Open-Unassigned. Defects in this state are recognized as valid; however, the team lead has not assigned the defect to a team member for resolution, or the team lead cannot propose a current resolution for the defect.
Open-Impact Assessment (CCB). Defects in this state are recognized as valid; however, the team lead has not assigned the defect to a team member for resolution because the impact of the resolution on the system is being analyzed by the Change Control Board.
Open-Assigned. Defects in this state have been assigned to a team member who is currently working on a solution to the problem.
Closed-Duplicate. Defects in this state have been reviewed by the team lead but closed because an identical defect has already been reported.
Closed-Invalid. Defects in this state have been reviewed by the team lead but closed because either the tester made a mistake in executing the test case or the resolution for the defect is out of scope.
Retested (Passed). Defects in this state have had a resolution (fix) successfully implemented.
Retested (Fail). Defects in this state encountered failures indicating that the fix was not successfully implemented.
Ready for retesting. The defect resides in this state to indicate that a fix for the defect is ready to be retested.
Ready for transport. Defects in this state are ready for promotion to the final SAP client (i.e., Production, QA).
Complete (Closed). Defects in this state have been successfully tested, and all corresponding defect documentation and fields have been updated. Only the defect submitter can complete the defect.
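Because the states in Exhibit 13.5 imply a constrained workflow, a project can encode the permitted transitions so that a tracking tool, or a simple script, rejects illegal status changes. The transition table below is one plausible reading of Exhibit 13.5, not a workflow defined by the exhibit itself, and the guard function is a hypothetical illustration.

# A minimal sketch of guarding defect status changes against the states
# in Exhibit 13.5. The allowed-transition table is an interpretation of
# the exhibit, not an authoritative workflow definition.
ALLOWED = {
    "Submitted": {"Review-Team Lead"},
    "Review-Team Lead": {"Open-Unassigned", "Open-Assigned",
                         "Closed-Duplicate", "Closed-Invalid",
                         "Open-Impact Assessment (CCB)"},
    "Open-Unassigned": {"Open-Assigned"},
    "Open-Impact Assessment (CCB)": {"Open-Assigned", "Closed-Invalid"},
    "Open-Assigned": {"Ready for retesting"},
    "Ready for retesting": {"Retested (Passed)", "Retested (Fail)"},
    "Retested (Fail)": {"Open-Assigned"},
    "Retested (Passed)": {"Ready for transport"},
    "Ready for transport": {"Complete (Closed)"},
}

def change_state(defect, new_state):
    """Apply a status change only if the workflow table permits it."""
    current = defect["state"]
    if new_state not in ALLOWED.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new_state}")
    defect["state"] = new_state
    return defect

defect = {"id": "D-200", "state": "Submitted"}
change_state(defect, "Review-Team Lead")   # permitted
change_state(defect, "Open-Assigned")      # permitted
# change_state(defect, "Complete (Closed)") would raise ValueError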


Defects with different statuses, severity levels, and categories can be extracted into reports for presentation to the project's senior managers. Test management tools offer capabilities for generating reports, graphs, and charts for reported defects, and they provide a single repository for reporting defects. Reports include defect trends, such as whether the number of defects is increasing or decreasing on a weekly basis. Other reports show, on average, how long it takes to resolve defects with severity levels one, two, and so on; how many defects remain open; and how long they have been open. The ability to query a database of defects and generate reports allows test and project managers alike to make informed decisions supporting the exit criteria for a testing effort, or to assess readiness for go-live. For instance, the exit criteria for integration testing can require that no defects with severity levels of one or two remain open, in addition to a decreasing weekly trend of defects, before integration testing can be exited.

The recommended fields to be populated before a defect is completed, and before its associated object (which triggered the creation of the defect) is transported into production or another target environment, are included below:

■ Results from impact analysis (i.e., include a description of affected processes and areas).
■ Level of effort (hours needed to resolve the defect).
■ Originator's name.
■ Description, including observed output (i.e., messages, system responses, test results, dumps, etc.).
■ Screen captures.
■ Category.
■ Priority.
■ Corresponding test case.
■ Affected SAP area or module.
■ Workarounds (if any).
■ Release notes (if any).

After the suggested fields above are populated, a defect can be closed or completed.
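Where defect closure is handled by a script or a lightly customized tool, this completeness rule can be checked mechanically before a defect is allowed to close. The field names below paraphrase the list above and are illustrative assumptions, not a schema from any specific tool.

# A minimal sketch that blocks closing a defect until the recommended
# fields are populated. Field names paraphrase the list above; the
# "(if any)" fields are treated as optional and excluded.
REQUIRED_FIELDS = [
    "impact_analysis", "level_of_effort_hours", "originator",
    "description", "screen_captures", "category", "priority",
    "test_case", "sap_area",
]

def can_close(defect):
    """Return the list of missing fields; empty means the defect may close."""
    return [f for f in REQUIRED_FIELDS if not defect.get(f)]

defect = {
    "originator": "J. Smith",
    "category": "Configuration Defect",
    "priority": 2,
}
missing = can_close(defect)
if missing:
    print("Cannot close; missing:", missing)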

CHAPTER 14

Testing in an SAP Production Environment

The axiom of a production-based SAP system is upgrades and system changes. In the human body, a medication that cures some symptoms may cause unintended side effects. In the software world, particularly in the world of SAP with its promise of data and process integration among modules, a single change to one area of the system may cause "side effects" in previously working system functionality. Many factors trigger changes and upgrades in a live SAP system (i.e., applying OSS [Online Service System] notes, graphical user interface [GUI] upgrades, etc.). These system changes must be thoroughly tested to avoid adverse cascading effects within the SAP landscape.

The ability to introduce vital system changes into a live SAP system is compromised when a project relies on manual testing and does not have an automated testing framework. Commercial automated tools provide a bona fide solution to support and facilitate upgrades and changes to an SAP production environment. Automated test tools facilitate and expedite the execution of test cases for regression testing. Despite the promise of implementing an effective automation framework with commercial test tools as a means to support the testing of SAP production changes, many SAP projects struggle to do so. Regression and performance testing with automated test tools can lead to the creation of a library of SAP test scripts that helps maximize the investment in test tools while increasing the dependability of mission-critical business processes running in the SAP production system.


PRODUCTION SUPPORT BACKGROUND

In an SAP production environment, system changes can be planned, emergency (ad hoc), or part of a major upgrade (i.e., a GUI upgrade). Planned system changes for improving the system, such as enhancements, routine maintenance, and patches, tend to drive the bulk of all system changes. Planned system changes are transported into production as part of a scheduled release. Improvement changes can be either optional or mandatory and are generally customer driven. Planned changes are considered customer driven because they often deal with a system improvement or enhancement that the SAP production support team implemented.

Emergency changes that are considered "showstoppers" can be transported into the production environment on an ad hoc basis, or as soon as a resolution is identified and thoroughly tested. Generally, emergency changes come from help desk tickets that end users report when they are unable to carry out a critical business process. Examples of emergency changes include the inability to run payroll, an inoperable system that users cannot log on to, segregation-of-duties violations where bogus vendors can be set up, and incorrect logic in quota arrangements that causes the company to incur financial losses. Emergency changes are normally considered mandatory, since they have no workarounds and can disrupt the business if a successful resolution is not identified and transported. Emergency changes can be customer driven or vendor driven: customer driven when the customer finds an in-house solution to the problem through the production support team, and vendor driven when the solution can come only from the vendor (i.e., an OSS note).

Major system releases and system upgrades, such as upgrading the GUI, implementing a new industry-specific solution, applying vendor-released hot packs, or adding a new SAP module, may require feasibility studies, gap analyses, workshops, and comprehensive testing before their implementation is considered. System upgrades of this nature are primarily vendor driven and may become necessary, or even mandatory, to prevent the system from becoming obsolete or to keep the existing vendor maintenance agreement current. Exhibit 14.1 illustrates the categories of production changes.

EXHIBIT 14.1 Categories of SAP System Changes (diagram omitted: three categories of changes—planned changes (customer driven, optional), ad-hoc changes (vendor driven, mandatory/optional), and emergency changes (customer driven, mandatory)—with triggers including end-user help desk calls, hot packs, patches, OSS notes, new industry solutions, gap analysis, deferred scope, company mergers, a discontinued prior version, a down production system, designs that violate company policy, and system designs that impact the bottom line)

The one constant and immutable heuristic associated with system changes is that they are subject to testing. The testing can be at any one of the following levels: unit, string, integration, regression, smoke, security, and performance. The amount of testing necessary to verify a system change depends on the event/trigger that causes it. Production SAP systems are susceptible to changes and upgrades from the following events:

■ Addition of SAP modules and/or SAP bolt-ons.
■ Application of kernel upgrades or ABAP hot packs.
■ SAP upgrades affecting custom configuration, out-of-box functionality, and report, interface, conversion, and enhancement (RICE) objects.
■ End user requests for enhancements.
■ End user problems reported to the SAP help desk or production support team.
■ Gap analysis revealing needed functionality for a future release.
■ Scope deferred from one release to a future release.
■ A new division/unit within the same company requesting SAP.

■ Exceptions and waivers from prior SAP releases rolling over to the production team.
■ Application of OSS notes and patches.
■ A company with an existing SAP environment buying another company that needs SAP implemented.
■ Support for older versions being discontinued, which forces the project to upgrade.
■ Hardware, database, or network upgrades.

It is inevitable that even the most static, generic, or out-of-the-box SAP production environment will, during its lifetime, undergo at least one of the aforementioned events. Exhibit 14.2 highlights a typical SAP production change (assuming the Change Control Board [CCB] has accepted the change) whereby an end user reports a problem to the help desk, the SAP production team resolves the issue, and the test team executes automated test scripts to verify the resolution.

Production changes and upgrades vary in degree of complexity. Some changes are as simple as adding a new value to a drop-down list. In contrast, other changes affect system functionality across multiple SAP modules, which can have far-reaching consequences for the company's bottom line. From a testing perspective, it is the latter production changes that consume the most time and resources when testing is conducted manually. When production changes that cause cascading effects are not thoroughly tested, the business is exposed to a higher risk of failure. Fortunately, for most SAP projects, the risks that system changes pose to the production environment can be mitigated with robust regression and performance testing supplemented with automated test tools. The first lines of defense against expected production changes are preparation and planning, followed by system development, implementation, and testing.

EXHIBIT 14.2 Testing a Hypothetical SAP Production Change

[Flowchart with swim lanes for the End User, Production Team, Functional Team, and Test Team: the end user calls the help desk to report a problem in production; the production team reviews the ticket, assigns it to a team lead, and communicates the impact to the test team; the functional team applies a fix and tests it in the development environment, resolving issues until the problem is fixed, then promotes the change into the QA environment; the test team selects scripts to run from the regression library, retests in QA, executes "sunny-day" scenarios to confirm the implemented fix didn't break anything, automates the newly fixed process, and reports findings; on pass, the change is ready for production.]

CHALLENGES TO PRODUCTION SUPPORT TESTING

Some of the main challenges to production testing include:

■ Complexity and frequency of system changes
■ Having dedicated resources to test system changes
■ Heavy reliance on manual testing
■ Rigorous testing causing schedules to slip
■ Labor cost of production testing

The sheer magnitude, frequency, and volume of SAP changes in a production environment can create a series of logistical issues for the production team. The production, integration, and test teams need to address who will do the testing and how system changes will be coordinated, documented, analyzed, reviewed, and tested. System changes vary in degree of complexity, but even a minor change to a single SAP transaction can have a rippling effect on the system's integrated processes.

Testing in a production environment includes identifying the affected processes that need to be tested as a result of the system change. For example, the application of an enhancement to the SAP transaction CJ20N can have cascading effects on an integrated SAP process containing touch points within an end-to-end process such as order-to-close. A process such as order-to-close can contain multiple strung-together SAP transactions, data values, and process variations; once all affected test scenarios are identified, it can take several individuals hours or days to test them and document the results. Software vendor Compuware offers the SAP Assessor Tool to identify the impact of a system change within an SAP environment. Furthermore, SAP transaction code SE51 provides a "where-used" function to identify where a program is used within SAP transactions.

Projects without automated test tools or a robust automation strategy will need to rely heavily on manual regression testing of the system, along with manual documentation or recording of the test results. Manual testing is not easily repeatable, takes functional resources away from their primary job responsibilities, is tedious to document, and is time consuming. Manual testing for complex end-to-end processes such as purchase-to-pay, hire-to-retire, and forecast-to-order requires the coordination of multiple individuals, each of whom may be familiar with only a portion of the end-to-end process. Testing of end-to-end processes can be time consuming and thus cause schedule slippages. Given these constraints and limitations of manual testing, many projects suspend or delay their plans to apply a system change. Manual testing also proves to be expensive when the same processes need to be frequently retested by multiple individuals.
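To make the impact-analysis step concrete, the following minimal Python sketch shows one way a test team might record which end-to-end scenarios string in which transactions and derive the regression set for a proposed change. The transaction-to-scenario mapping shown is purely illustrative; a real project would derive it from documented test cases or an impact-analysis tool.

    IMPACT_MAP = {
        # Illustrative mapping from SAP transaction codes to the
        # end-to-end scenarios that string them in.
        "CJ20N": ["order-to-close", "project-settlement"],
        "VA01": ["order-to-cash", "order-to-close"],
        "ME21N": ["purchase-to-pay"],
    }

    def regression_scenarios(changed_tcodes):
        """Return the end-to-end scenarios touched by the changed t-codes."""
        scenarios = set()
        for tcode in changed_tcodes:
            scenarios.update(IMPACT_MAP.get(tcode, []))
        return sorted(scenarios)

    # An enhancement to CJ20N ripples into every scenario that uses it:
    print(regression_scenarios(["CJ20N"]))  # ['order-to-close', 'project-settlement']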


Test automation allows projects to easily retest the same processes with multiple sets of data while verifying system attributes and configuration settings. Automation can also run unattended, without human intervention, which frees up functional resources. Furthermore, test tools are capable of producing test results and test logs with audit trails that facilitate information technology (IT) audits. Depending on the industry where SAP is implemented, regulations and company policies may dictate that screenshots be produced to verify that system changes were implemented correctly, and test tools facilitate the process of capturing and storing screenshots.
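To illustrate the kind of audit trail involved, here is a minimal Python sketch of a test-step logger that records a timestamped pass/fail result and a screenshot reference for each verification point. The capture_screenshot function is a stand-in for whatever capture mechanism the chosen test tool actually provides.

    import csv
    import datetime

    def capture_screenshot(step_name):
        # Stand-in: a real test tool would capture the SAP GUI window
        # here and return the path of the saved image file.
        return f"screenshots/{step_name}.png"

    def log_step(writer, step_name, passed):
        # Record one verification point with a timestamp, result, and
        # screenshot reference, forming an audit trail for IT audits.
        writer.writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            step_name,
            "PASS" if passed else "FAIL",
            capture_screenshot(step_name),
        ])

    with open("test_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "step", "result", "screenshot"])
        log_step(writer, "create_sales_order", passed=True)
        log_step(writer, "verify_invoice_posted", passed=True)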

AUTOMATION TESTING THROUGH SUNNY-DAY SCENARIOS

Although it is possible to maintain and support SAP in the production environment in the absence of third-party tools, experience shows that manual SAP testing is prone to error, strains the resources of the configuration team, requires much coordination, and is expensive and time consuming for end-to-end processes that have many variations. Manual production testing becomes especially impractical, requiring an army of manual testers, for processes subject to SAP variant configurations that have hundreds of possible combinations for creating a finished product. For instance, in the automotive industry it is possible to build or configure a vehicle for purchase over a website with hundreds of combinations, and after a production change all combinations for configuring a car must work correctly. In this example, the only feasible or cost-effective way to test all vehicle combinations would be with automated test tools. In Appendix B, techniques predicated on Taguchi's design of experiments are described under the software-testing principle of orthogonal arrays. Orthogonal arrays can reduce the number of test cases while still providing maximum system coverage, and they are suitable for projects with SAP variant configuration.

Test tools as described in Chapter 5 offer a viable alternative for supporting the production system. Test tools allow for the parallel execution of several end-to-end scenarios spanning multiple SAP transactions that would otherwise prove unwieldy or too resource intensive through manual testing efforts. End-to-end scenarios are processes that string together several SAP transactions and can contain verification for system functionality, SAP roles, workflow, interfaces, reports, and performance.

Companies that own test tools can build a library of automated test scripts for sunny-day scenarios that can be executed and repeated on a regular basis to ensure that production transports have not adversely affected previously working system functionality. Sunny-day scenarios are a representation of a business process with error-free system behavior. They are primarily designed to verify frequently executed SAP processes within a single module, containing touch points and critical system functionality. Testing of sunny-day scenarios includes testing of process variations, reversals, adjustments, and cancellations. Exhibit 14.3 shows an example of the end-to-end scenario requirement to invoice, consisting of five variations that can be automated and scheduled to run at a predefined interval before changes are promoted into the production environment. Rainy-day scenarios, in contrast, take into account possible system exception and error cases.

Companies that do not have the resources or in-house expertise to build a library of test scripts can acquire from third-party vendors a library of pretested SAP test scripts that can be customized to match their internal business processes. Appendix A delves into the concept of commercially available SAP test libraries and SAP accelerators. A starting point for automating sunny-day scenarios would be the SAP implementation tool Solution Manager, which under the activity for Define Baseline Test Cases offers an accelerator containing a list of predefined test scenarios in addition to the scenarios that can be launched from the SAP support portal.

EXHIBIT 14.3 End-to-End SAP Scenario Containing Multiple Variations

Business Scenario            Variation
Requirement to Invoice       Service PO
Requirement to Invoice       Nonstock PO
Requirement to Invoice       Service OA
Requirement to Invoice       Material OA
Requirement to Invoice       Consignment OA
Requirement to Invoice       Stock PO
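Variations like those in Exhibit 14.3 lend themselves to data-driven automation: one recorded script, replayed once per variation. Here is a minimal Python sketch, in which run_requirement_to_invoice is a hypothetical stand-in for the recorded transaction flow in the chosen test tool:

    VARIATIONS = [
        "Service PO", "Nonstock PO", "Service OA",
        "Material OA", "Consignment OA", "Stock PO",
    ]

    def run_requirement_to_invoice(variation):
        # Stand-in for the recorded requirement-to-invoice flow; the
        # test tool would drive the SAP GUI and return actual pass/fail.
        print(f"Executing requirement to invoice, variation: {variation}")
        return True

    results = {v: run_requirement_to_invoice(v) for v in VARIATIONS}
    failed = [v for v, ok in results.items() if not ok]
    print("All variations passed" if not failed else f"Failed: {failed}")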


Alternatively, companies that have built functional teams around production processes such as order-to-cash, purchase-to-pay, work-to-pay, hire-to-retire, and so on can build automated test scripts around their established functional teams. For companies that built functional teams around standalone SAP modules (e.g., Human Resources, Project Systems, Materials Management) without the Solution Manager implementation tool, sunny-day scenarios can be identified through workshops and seminars with an audience of stakeholders for the entire end-to-end process.

After the initial sunny-day scenarios have been identified based on predefined criteria and proven to work manually in a test environment, the test team can either develop the necessary modular and reusable test scripts from scratch or tweak generic test scripts purchased from a commercial test tool vendor. The processes should be proven to work manually in order to avoid wasting automation effort on an unstable SAP environment. The project will need to rely on dedicated, experienced in-house or outsourced resources to develop and maintain the necessary automated test scripts. Chapter 5 discusses the suggested skill sets for a test tool automator and rules of thumb (heuristics) for designing test scripts.

Initial sunny-day test scenarios can be placed in a repository or test management tool and subjected to version control. The first step in building a scenario is to record standalone SAP transaction codes, which are the building blocks of the scenario, and then string the recorded transactions together to form a much larger SAP process. The standalone recordings can be recycled or tweaked to form other end-to-end scenarios, as shown in the sketch below. For example, SAP t-code VA01, which creates a sales order, can be tested as part of the order-to-cash scenario, but through modifications the recording of VA01 can be reused in the order-to-close scenario. After the initial sunny-day scenarios are scripted, follow-on scripting can include the variations for the end-to-end scenarios, assuming that the scenario variations are stable and proven to work manually. New processes to be automated should be documented with test cases and tested manually in both the DEV and the TEST environments. The recording of end-to-end scenarios and corresponding variations can lead to the creation of a comprehensive library of test scripts representing business-critical functionality.
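The building-block idea can be illustrated with a short Python sketch in which each recorded transaction is a reusable function and a scenario is simply an ordered list of those building blocks. The transaction functions are stubs standing in for recorded scripts, and the scenario compositions are illustrative:

    def va01(order_type):
        # Stub for a recording of VA01 (create sales order).
        print(f"VA01: create sales order ({order_type})")

    def vl01n():
        # Stub for a recording of VL01N (create outbound delivery).
        print("VL01N: create delivery")

    def vf01():
        # Stub for a recording of VF01 (create billing document).
        print("VF01: create invoice")

    # The same VA01 building block is recycled in two scenarios, with a
    # modified parameterization for each.
    SCENARIOS = {
        "order-to-cash": [lambda: va01("standard"), vl01n, vf01],
        "order-to-close": [lambda: va01("project"), vl01n, vf01],
    }

    def run_scenario(name):
        print(f"--- {name} ---")
        for step in SCENARIOS[name]:
            step()

    run_scenario("order-to-cash")
    run_scenario("order-to-close")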


The following is an example of a potential application of sunny-day scenarios. The SAP production team implements and manually tests a configuration change, which will be included as part of a scheduled production transport. When the CCB analyzes the configuration change, it determines that the change would impact other SAP processes within end-to-end scenarios. The test team is informed of the configuration change and proceeds to execute automated end-to-end processes to ensure that the change does not adversely affect business-critical processes before it is promoted into production. The test team verifies that touch points, business rules, functional requirements, segregation of duties, and workflow are not impacted by the new system change, and it tests end-to-end processes with variations and multiple sets of data, which otherwise would have been too expensive or time consuming with manual testing.

APPROVALS FOR CHANGES

Approving production changes requires a series of handoffs and approvals from various business stakeholders, including the end user. Many projects rely on a series of disconnected spreadsheets, phone calls, and e-mail messages to track the actions performed on a production change and the person signing off on the change. Visibility and transparency are hindered when companies do not track approvals and actions for changes within a single version-controlled repository. For companies that are subject to IT audits, Sarbanes-Oxley, or industry regulations (such as those governing pharmaceuticals), it is essential to thoroughly document all actions performed on system changes in a single repository with audit trails and reporting capabilities. Companies such as Mercury Interactive offer tools for handling the necessary handoffs to transport objects into production after system changes; one tool from Mercury Interactive that can reduce the time needed to transport SAP objects is Mercury Deployment Management Extension™ for SAP® Solutions.

Expected stakeholder teams for approving a production change include the test, integration, change management, development, Basis, and functional teams, as well as the end users (client). The test team verifies the functionality of the implemented change and ensures that the change does not impact existing functionality through the execution of sunny-day scenarios. The test team can also document the test results and capture screenshots from the system to show that the change was implemented, designed, and configured properly. The change management team may update the training materials, business process procedures (BPPs), and release notes, depending on the complexity and type of system change. The functional and development teams implement the change, test the change manually, and help draft or refine test cases for the system change. Furthermore, for proposed system changes that have been successfully tested, the configuration and development teams will need to update the corresponding documentation, such as flow process diagrams and technical and functional specifications associated with the changed objects. The Basis team transports the change after the proposed change and its testing have been approved by all necessary stakeholders. The integration team ensures that all approvals have been granted for the system change and that the change is transported under one of these situations: planned, emergency, or ad hoc.

End users are critical stakeholders in certifying the test results, since the system enhancement or change is implemented to help them achieve their everyday tasks. End users should verify that changes originating from the help desk tickets they reported are in fact resolved successfully. Furthermore, end users may need training for system changes that alter screen layout and appearance, business logic, roles, integration touch points, or workflow. Within some organizations, only the end user is permitted to close tickets reported through the production help desk.

TYPES OF PRODUCTION TESTS

Production changes require extensive regression testing for impact, but which aspects of regression testing should be considered is often misunderstood. For instance, custom objects and embedded security can be adversely impacted by the implementation of a hot pack, OSS notes, or a configuration change. Likewise, a system upgrade, new interfaces, new batch jobs, or the addition of a new module can impact system performance and cause unnecessary bottlenecks and degradation points that can render the system inoperable.

Another misunderstood concept in production testing is that projects will test system changes only at the SAP GUI level and overlook the system behavior at the back end. For instance, if new fields are added to a screen from the SAP bolt-on Supplier Relationship Management (SRM), it may be necessary to test both that the application correctly displays and populates the fields at the GUI level and that the fields are correctly populated and inserted in the system database. Typically, companies develop automated sunny-day scenarios that verify the attributes, properties, and characteristics at the GUI level but not that the system is behaving correctly at the back end, which introduces a risk to the business. Test scripts can be enhanced to address this deficiency and include programming logic to verify the database.

The teams conducting production testing should verify, at a minimum, that system security, performance, functionality, workflow, business logic, and enhancements (user exits) are not compromised when a system change is introduced, from both the front end and the back end. In addition to verifying system functionality, automated test scripts should include log-ons for test users based on their roles in order to test system security. The scripts should also have logic for sending and verifying system notifications. Depending on the system change, the project may need to develop the same sunny-day scenarios in both the functional testing tool and the load testing tool to ensure that service-level agreements (SLAs) and optimal system performance are maintained when system changes are introduced; the changes and their rippling effects are tested with the functional test tool. Exhibit 14.4 depicts the various types of tests that may be conducted depending on the type of SAP system change that is implemented. After the functionality has been verified, system performance is tested; the rationale is that system functionality must be stable before a performance test is attempted. Systematic and robust regression testing for security, workflow, performance, and functionality at both the back end and the front end will reduce the risk of system failure as a result of a system change.
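The principle of testing at both levels can be sketched as follows: one assertion on what the GUI displays and one on what was actually written to the database. Both helper functions in this Python sketch are hypothetical placeholders for the project's GUI test tool API and its database or RFC interface, and the table and field names are illustrative:

    def read_gui_field(screen, field):
        # Placeholder: the functional test tool would read the field
        # value from the SAP GUI screen.
        return "ACME Corp"

    def query_database(table, column, key):
        # Placeholder: an RFC call or SQL query against the back-end table.
        return "ACME Corp"

    def verify_front_and_back(screen, field, table, column, key):
        gui_value = read_gui_field(screen, field)
        db_value = query_database(table, column, key)
        assert gui_value == db_value, (
            f"Mismatch: GUI shows {gui_value!r}, database holds {db_value!r}"
        )
        print(f"{field}: GUI and database agree ({gui_value})")

    # Illustrative check of a vendor name at the GUI and in the database:
    verify_front_and_back("vendor_detail", "vendor_name", "LFA1", "NAME1", "100001")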

EXHIBIT 14.4 Types of Tests Conducted Based on the Type of SAP System Change

[Matrix mapping each type of system change (patches, OSS notes, hot packs, GUI upgrade, config changes, new module, added RICE objects, and deferred scope) to the tests to conduct: functional, security, workflow (optional), front/back ends, performance, and development, with performance, front/back end, and development testing optional for some change types.]


SUPPORTING THE TESTING EFFORT

Production testing requires support, assistance, and coordination from multiple entities. For example, the following outputs may occur from a system change:

■ Updating of BPPs
■ End-user training
■ Documentation of new test cases
■ Development of automated test scripts and new system settings
■ Development of new RICE objects
■ Updates to functional specifications, flow process diagrams, and requirements

Managing all these outputs requires resources from the configuration, development, change management, test, and integration teams. For companies without dedicated test teams, resources from the functional and development teams have to play dual roles, which prevents them from focusing on their primary job responsibilities. Without a dedicated test team, maintaining and developing test scripts can become an intractable challenge. Exhibit 14.5 identifies the anticipated support team roles needed to introduce, test, and sign off on a system change.

Having a test team does not mean that the test team can effectively test production changes in isolation. A test team needs to interface with stakeholders from various teams in order to test the application, retest it in the event that the change was implemented incorrectly, and document test results. The interaction between the test team and the configuration team increases when the testing activities (the test automation activities, in particular) have been outsourced. In an outsourced agreement, the test team typically focuses on test script automation, where the expertise resides with test tools and not necessarily with the SAP business processes or logic. The outsourced test team may not have the necessary domain expertise or SAP knowledge to test system changes without fully documented test cases from the configuration team. The outsourced team may, however, automate test scripts successfully through well-documented test cases that contain expected results, and with ongoing assistance from a functional SAP expert.

EXHIBIT 14.5 Supporting Roles to Review, Implement, Close, and Approve a System Change

[Swim-lane chart of supporting roles across the Test (in-house/outsourced), Functional, Change Management, Integration, Basis, Development, and End User teams. Activities include: implement and manually test the initial change; fix problems; update specs and diagrams; update/create test cases and BPPs; create release notes; provide training; enforce standards; maintain requirements traceability; help evaluate automation criteria; automate processes; execute sunny-day scenarios; develop performance scripts; maintain the library of scripts; manage test tools; verify SLAs; monitor the system for SLAs; report results; retest; schedule releases; participate in and chair the CCB; transport the change; and sign off on the change.]

Projects with dedicated test teams (whether in-house or outsourced) can also follow consistent testing practices, which include documenting test cases on approved testing templates, reporting test results with screenshots, and supporting test tools. Test teams can also participate in CCB meetings and provide verification of the system that is independent of the person who implemented the change. In Food and Drug Administration (FDA) and other regulated environments, test teams can focus on documenting test results with screenshot printouts for the successful implementation of the system change and subsequently generate reports containing test metrics for the executed test cases.

TECHNIQUES FOR EXECUTING AUTOMATED SCRIPTS

Automated test tools increase testing coverage and reduce the turnaround time needed to introduce a change or a fix into the production system. After an automated test case has been constructed and designed in an automated test tool, the time needed to execute it may be only a fraction of the time needed to execute the test case manually, which shortens the test execution phase for changes introduced into a production environment. Automated test cases also provide greater testing coverage, since they can be executed unattended and for multiple variations of a given scenario (e.g., the hire-to-retire scenario). Automated test cases also create automatic test logs and test results after they have been executed, which saves time over manually recording test results.

Automated test cases cannot replace all manual testing for regression testing. New changes or system fixes introduced into a production environment first need to be manually unit- and string-tested in a development environment. The newly introduced change or system fix is further tested manually as part of an end-to-end process or larger scenario to ensure that it behaves as expected and conforms to documented requirements. However, before the system change is promoted into the production environment, it is necessary to test that the change does not adversely affect other system functionality. Automated test cases are an effective technique for verifying the potentially affected system functionality. Attempting to test manually all affected or impacted components may prove infeasible for projects that cannot devote resources to full-blown regression testing and recording of test results.

Automated test cases increase the confidence that vital or business-critical system functionality still works as expected after a proposed system change is introduced into the development and test clients and prior to its transport into the production environment. Exhibit 14.6 shows the various levels leading to a regression test. The newly proposed system change is first tested manually at the unit, string, and integration levels, and the impact of the proposed change is subsequently regression tested with automated test tools. In the absence of automated test tools, the production team or SAP consultants would have to test all potential impact scenarios manually, which may not be possible given project constraints or availability of resources. From Exhibit 14.6 one can see that automated test cases increase testing coverage and increase the likelihood of verifying the impact of the proposed system changes on various business-critical scenarios.

EXHIBIT 14.6 Hierarchy of Regression Testing for Changes Introduced to the Production Environment

[Diagram: a single t-code undergoes manual unit testing; t-codes 1 through 3 are strung together for manual string/integration testing; scenarios 1 through 3 built from them are then regression tested with automated tools.]

Production teams should consider the following techniques before executing automated test scripts to evaluate implemented production changes:

■ Sequence for executing test scripts
■ Identifying which test scripts need to be executed
■ Maintaining test script data and data seeding
■ Prioritizing test scripts
■ Workstations (hardware) to execute test scripts
■ Allocating a dedicated SAP environment for test script playback
■ Assigning tasks for test script execution or running test scripts unattended
■ Announcing (communicating) the execution schedule to the project
■ Holding scheduled sessions to report results from executed test scripts
■ Capturing and storing test results (including screenshot printouts)
■ Resolving test script errors
■ Signoffs

Automated test scripts require maintenance, coordination, and analysis. The promise of libraries of test scripts with unattended (without human intervention) playback is difficult to attain without appropriate support and automation standards. Ideally, entire libraries of regression test scripts can verify system functionality, response times, and SLAs with little or no human intervention. However, this goal is hampered when the following occurs:

■ Test scripts are not scheduled to execute in the right sequence.
■ Test data conflicts exist.
■ There is confusion over roles and responsibilities.
■ Test results are not captured or saved.
■ Signoffs and approvals are ignored.
■ Test scripts are not selected to verify cascading effects from the implemented change.

When the CCB meets to review future production changes, a decision is made whether the change should be adopted, rejected, or put on hold. According to Karl Wiegers, author of books on software requirements, evaluating the following considerations can help address the impact of a system change:1

■ Identify the other system components you'll likely have to change. These might include other requirements, design descriptions, code, tests, user publications, help screens, system documentation, project plans, shared libraries, hardware, and even other subsystems or applications.
■ Judge whether the change is technically feasible and can be accomplished at acceptable cost and risk. Will it conflict with other functions or overtax system resources such as processor capacity, memory, or communications bandwidth?
■ Evaluate the possible impact on the system's performance, response time, efficiency, reliability, usability, integrity, and other critical quality attributes.
■ Estimate the amount of work effort involved.

1 Karl E. Wiegers, "Requirements When the Field Isn't Green," STQE, May/June 2001.

Furthermore, before a production change is accepted it should be evaluated for cascading system effects and priority. The test team, in collaboration with the configuration team, analyzes the impact of the change on the system and determines which automated processes need to be executed in a predefined sequence in order to verify existing system functionality. The test team assigns the tasks associated with executing scripts and collecting test results to individuals with sufficient technical expertise in the automated test tools. The assigned test team members verifying the system change should ensure that the test scripts play back successfully, that they have valid and sufficient test data, and that other system users do not interfere with the execution of the test scripts. After test scripts are executed, standards that help facilitate compliance with IT audits include capturing test logs, test results, and screenshot printouts. The test results are subject to peer review and signoffs, which serve as part of the criteria and approval process for transporting the system change into production. With these suggested guidelines and standards, the likelihood of successful, repeatable script playback increases, which helps meet the challenge of timely promotion of SAP objects in an SAP environment.
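Several of these guidelines can be pulled together into a small playback harness: scripts execute in a predefined sequence, a data precondition is checked before each run (so data seeding problems block playback instead of producing false failures), and results are captured for peer review and signoff. The following Python sketch uses stubs for the scripts and data checks:

    EXECUTION_PLAN = [
        # (script name, data precondition stub, automated script stub)
        ("create_vendor", lambda: True, lambda: True),
        ("purchase_to_pay", lambda: True, lambda: True),
        ("verify_workflow", lambda: True, lambda: True),
    ]

    def run_plan(plan):
        results = []
        for name, data_ready, script in plan:
            if not data_ready():
                # Missing or conflicting test data blocks playback rather
                # than producing a misleading failure.
                results.append((name, "BLOCKED: test data not seeded"))
                continue
            results.append((name, "PASS" if script() else "FAIL"))
        return results

    # Captured results feed the peer review and signoff step.
    for name, outcome in run_plan(EXECUTION_PLAN):
        print(f"{name:20s} {outcome}")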


CHAPTER 15

Outsourcing the SAP Testing Effort*

OUTSOURCING DEFINED

Outsourcing is a term that describes the practice of seeking resources outside of an organization to provide a service. The goal of outsourcing is usually to save money and/or to leverage a service provider that can do the job more efficiently or effectively than the internal staff. A common example of outsourcing in the information technology (IT) world is application development. However, complete business functions such as human resources, customer service, software testing, software development, and call centers may also be outsourced to another party. As the practice of contracting service providers outside of North America has become prevalent, many people confuse the terms outsourcing and offshoring. In truth, offshoring, or contracting with a service provider overseas, is but one of several ways to contract outsourced services.

WHY OUTSOURCE SAP TESTING?

Dynamic organizations encounter many demands and shifting priorities among their internal teams, and the need for outsourcing of any type can change as business circumstances change. It is not necessarily a resource gap or a deficiency in the internal team that leads to outsourcing. Rather, in a mature organization, it is likely a business decision, driven by business value, that leads to an outsourcing solution.

* This chapter was authored by Lorrie Collins, National Solutions Director for Spherion Corporation.


Limited Resources

The project team that drives an enterprise initiative like an SAP implementation typically begins to morph into an enterprise of its own, consuming all subject matter experts (SMEs) from the business and IT communities for this mission-critical job. Often, this leaves only a skeleton crew to maintain the equally critical day-to-day business. The internal SME assets are assigned to the highest-value activities to drive present and future success, and precious few resources remain to conduct testing.

Cost Savings

Most third-party system integrators (SIs) include testing in the scope of the implementation, from both a process perspective and a resource perspective. This approach violates several quality assurance (QA) tenets if not handled appropriately (see "Reduction of Risk through Independent Testing" below) but is quite profitable business for the SIs. A savvy IT executive will recognize that outsourcing the testing to another party can shave significant dollars from the budget.

Reduction of Risk through Independent Testing

A test is "independent" when it is conducted by a person who has not been involved in the development or implementation of the software. This independence enables the tester to be more objective and readily carry out the fundamental goal of any testing activity: finding defects. Independent testing also provides the following benefits:

■ An "egoless" approach. People who have invested much time and effort in the build process are not the best people to test it. Independent testers do not have a vested interest in testing outcomes as the developers do and are, therefore, less biased.
■ Detection of more diverse types of errors. Independent testers are unlikely to test the software the same way as the build team, providing a greater likelihood of finding errors that the build team missed.


■ More controlled and disciplined management of the testing process. Formally trained and experienced independent testers establish a formal relationship between the build team and the test team.
■ A fresh perspective on the requirements. A "second set of eyes" may reveal whether the build team misunderstood or misinterpreted the requirements. Two groups are unlikely to have the same misconceptions.

Lack of "Know-How"

Many organizations know or suspect that quality assurance process maturity is an area of need. "Knowing that you don't know" is a positive step toward reducing the risk that a mission-critical implementation goes awry. Leveraging a knowledgeable, skilled, and experienced testing partner is a form of insurance to protect the sizeable investment in the new system.

Shortage of Physical Space

The consumption of all resources by an SAP project may extend to physical space. It is not unusual for contractors supporting the project to fill every cubicle, office, conference room, and hallway. Finding the space for a testing team to plan, collaborate, and execute may be a logistical obstacle that can be addressed through outsourcing.

OUTSOURCING FACTORS

Options for outsourcing the testing effort are numerous, each with its own set of benefits and drawbacks. Layered on top of the complexities of the overall system implementation, a successful outsourcing decision considers the following factors:

■ Cost. Focus not only on the hard dollars, but also on the return on investment. How quickly can success be realized within each option? How does each option achieve reusability for continued value?

■ Risk. Consider how the sourcing strategy could positively or negatively impact the critical success factors of the project, as well as the critical success factors of the new system's ability to support the business.
■ Organizational readiness. How will the sourcing option impact the organization's capacity to maximize the value of the testing service? Organizational readiness is a compilation of work ethics, relationships, management style, leadership, process maturity, and old baggage (successes and failures of the past). Will outside resources be accepted or rejected?
■ Team dynamics. How will individual stakeholders perceive the various options as a personal win or loss situation? How do roles and responsibilities, reporting relationships, promotions, and hiring and firing factor into the sourcing decision?
■ Technical factors. How do technical architecture, technical standards, industry knowledge, and regulatory requirements knowledge factor into the sourcing decision?
■ Testing capabilities. The decision maker should weigh the skills, knowledge, experience, and track record of the sourcing vendors. Bottom line: Can the job be done successfully?
■ Business value. In summary, consider how the testing sourcing strategy will benefit or detract from business goals and project objectives. What constraints (budget, time, scope, and resources) must be balanced with potential benefits (return on investment [ROI], time-to-market, and quality)? In other words, "What makes the most sense for the current environment conditions?" and "Where do I get the biggest bang for my buck?"

OPTIONS FOR OUTSOURCING TESTING

It is imperative that the organization align the business value and the impact of the factors outlined above with the outsourcing decision. Once this is accomplished, the risk of making the wrong sourcing decision can be greatly reduced. The following are some options for outsourcing testing.


Deliverables-Based Project

In this option, the sourcing party (supplier) agrees to solve a business problem (in this case, to complete testing of the system) in exchange for a fee. This is typically known as a "solution." The buyer is exchanging a fee for the promise of a predefined outcome. The supplier takes on significant accountability and ownership of the risk in this arrangement in order to meet the buyer's expectations. To meet these expectations, the supplier drives the approach, selects the team resources, manages and directs the resources on a day-to-day basis, and maintains the test environments as required. The general approach, deliverables definition, processes, communication, logistics, and other details are agreed to in advance by the buyer and supplier. As this arrangement is defined as a project, there is a definitive start and end, making this solution appropriate for an initial implementation or major release. This option is also viable for a buyer who is expecting "expert" services where the organization is lacking, or where the buyer wishes to shift ownership, management, and direction to a third party in order to free the internal staff to focus on other objectives.

Managed Service

This option has the same characteristics as a deliverables-based project but is delivered on a time-based arrangement, generally one to three years. The two approaches differ in that a deliverables-based project spans the life cycle of a development or maintenance cycle, while a managed service spans a calendar period. The effectiveness of a managed service is usually measured through service-level agreements (SLAs). Since testing depends to a great degree on the application itself and on the progress of the build (development and configuration) team, SLAs are sometimes difficult to measure. Ample thought should be given to the approach and effort for gathering data points and measuring SLAs against expectations. Many organizations zealously set too many expectations or set expectations that cannot be easily isolated or measured. Organizations should target a maximum of two to three SLAs for a managed service.


Some example SLAs include:

■ On-time test management reporting
■ Test case development request response
■ Customer satisfaction (measured by survey)

A managed service is ideally aligned with ongoing maintenance of the system. All things being equal, the buyer should look for the supplier to increase efficiencies over time, potentially reducing the cost of the service. As in the deliverables-based project, the managed service places ownership, management, and control with the supplier.

Staff Augmentation

In this approach, the supplier provides a skilled and experienced resource who matches the buyer's requirements. The buyer provides day-to-day direction for the resource and owns the testing approach, as well as the outcome of the testing process. Here, the buyer retains control of the resources. The buyer gains a resource that does not require training in the technology, but the resource will need to learn corporate processes and adapt to the organization's culture. Careful consideration must be given to the trade-off between these two knowledge areas.

Managed Staffing

This arrangement is a multistaff augmentation approach with the benefit of administrative supervision. One or more of the staff augmentation team members are given supervisory responsibility. The staffing supervisor offloads administrative duties from the client and pushes down the directives of the client to the team. As in managed services, this type of agreement is time based, often spanning one to three years with possible contract renewal. A managed staffing arrangement is ideal for an organization that needs skilled resources but wants to retain direction and control of the testing function. There are no SLAs involved in this approach, since the process and outcome are client driven.


ONSITE VERSUS OFFSITE/OFFSHORE

The outsourcing approaches listed above can be delivered in the following ways:

■ Onsite in the buyer's environment
■ Offsite at a supplier's test lab
■ Offshore in a supplier's environment

The key to achieving and maintaining business value in offsite/offshoring lies in establishing management structures that enable all parties to work together effectively. Gartner research shows that effective, integrated relationships are a key factor in delivering long-term success. Research also shows that:

■ Good services integration increases flexibility and improves delivery.
■ Poorly integrated relationships are expensive to manage and, in most cases, fail to deliver what the business needs.

Many organizations mistakenly believe that services will operate the same offshore as they would onshore, but at a much lower cost. In truth, the risk of an offsite or offshore testing engagement rises significantly, for a multitude of reasons. Many companies also assume that using an offshore vendor that has been assessed at Capability Maturity Model (CMM) Level 5 will allow the business to reach its cost-savings goals while maintaining and even improving the quality of its IT products. After all, if the offshore vendor is CMM Level 5, it must do everything the right way. Why, then, have so many offshore initiatives failed to achieve the anticipated quality goals and cost savings? Offshore services must be delivered with innovative practices that bridge the chasm between a buyer who is probably not operating at CMM Level 5 and the offshore vendor that can operate at CMM Level 5. Without innovative processes driven by an onsite integrated relationship team, the two parties speak widely divergent languages and cannot be successful.

The integrated relationship team is a critical component that bridges the local and remote team members. This team, which sits onsite, focuses on people, processes, and skill sets. Members of the onsite team must have the right balance of competencies:

■ Behavioral. Personal attributes and characteristics: "know why"
■ Business. Business knowledge and awareness: "know what"
■ Technical. Technology skills: "know how"

This onsite team, sourced from the supplier, is a critical element in the structure of any offsite or offshore testing arrangement and greatly enhances the chance for success. The approach to effectively integrating offsite or offshore resources into a cohesive project team involves several basic repeatable principles: accountability, verification, communication, repeatability, and continuous improvement.

■ Accountability. Roles and responsibilities must be clearly defined if a project combining near and offshore resources is going to be delivered on time and within budget. Each resource must clearly understand their duties and how their respective actions influence overall project success. A project liaison role is essential to help facilitate this understanding by aggressively communicating and auditing all quality gates established by the project team.
■ Verification. As is the case with any project, authenticating completed tasks is a critical project success factor. By verifying the accuracy and comprehensiveness of all completed tasks, the team is able to identify problem and high-risk areas early in the project life cycle and improve the odds of sustainable success.
■ Communication. When utilizing an offshore provider, communication and cultural issues will likely surface. There is an abundance of research available detailing failed projects caused by poor communication. A communication strategy and plan must be developed, and the work effort managed against the plan, to assure that all team members receive information in a timely manner and understand its content.
■ Repeatability. This is an ageless key to long-lasting success. Processes governing all offshore work must be universally understood and practiced consistently. By implementing repeatable processes, cycle time is improved, reducing costs, improving quality, and enhancing communication.
■ Continuous improvement. Offshore service delivery is an ever-changing effort. Processes, standards, and policies governing one client may not work for another. The service provider must continually evaluate process performance, identify improvements, and integrate these improvements back into the process.

INCREMENTAL TRANSITION: THE BOT MODEL

Mature offsite/offshore vendors utilize a build-operate-transfer (BOT) model to incrementally transition the testing service from the client's location to the offsite/offshore test lab. This is a crucial part of a successful testing solution and should not be cut short. The BOT model will include activities such as infrastructure planning and implementation, process development, resource training, piloting, monitoring, and reporting. In summary, best practices in offsite/offshore testing include:

■ Effective process integration
■ Structured communications
■ Process monitoring
■ Evaluation and feedback
■ Incremental transition
■ Continuous improvement

STRUCTURING THE TERMS OF THE TESTING SERVICE

Service providers typically gravitate toward a standard and consistent way of arriving at terms for the testing service that will allow them to deliver successfully, achieve customer satisfaction and referenceability, minimize risk, and make a reasonable profit. The level of complexity of the terms is most often directly related to the amount of risk and ownership assigned to the service provider.


Staff Augmentation

As discussed earlier, staff augmentation bears the least amount of risk for the service provider and assigns sole ownership of the service outcome to the buyer. In this scenario, the supplier assists the buyer in identifying the knowledge, experience, skill sets, and traits required of the resource and identifies a candidate for the client. This process offloads the enormous task of searching for, qualifying, and screening candidates. Often, the supplier has many ways to source candidates that the buyer does not have at his or her disposal. Terms are typically limited to pay rate and duration of the assignment.

Solutions

In other arrangements, the supplier is accountable for delivering an outcome for the engagement. This is referred to as a "solution." Deliverables-based projects, managed services, and managed staffing fall into the "solution" category. Since the supplier is held accountable for delivering an outcome, much analysis and planning is required. Steps to arrive at terms include:

Step 1 Confirming the scope.
Step 2 Fully understanding and validating the client's requirements.
Step 3 Architecting a solution that achieves the desired outcome, including definition of deliverables (test strategies and plans, test cases and scripts, test management reports, etc.).
Step 4 Aligning testing tasks to the overall project schedule.
Step 5 Sizing the team accordingly.

Once the approach and sizing are established, the supplier can provide a fixed cost or an estimate. Pricing is typically structured as an hourly rate. Pricing by deliverable (i.e., test strategy, test scripts, test results reporting) is an option but is rarely used, due to complexities in estimating. Testing has such a great dependency on many aspects of the software development life cycle that it does not lend itself well to pricing by deliverable.


Fixed Cost versus Estimate

Service providers will generally allow the buyer to select fixed-cost or time-and-materials payment terms. Some considerations for each are provided in Exhibit 15.1.

EXHIBIT 15.1 Fixed Cost versus Time-and-Materials

Definition
  Fixed cost: Costs are fully estimated in detail during the contract stage and the buyer is given a fixed or flat fee that can be paid on a variety of payment schedules.
  Time and materials: Costs are estimated in advance and communicated to the client. The client pays for the services rendered as they are received.

Relative Cost
  Fixed cost: Typically more expensive. The supplier may add a premium or contingency into the price to accommodate unexpected delays or problems. If the vendor brings the work in under schedule, the client still pays the predetermined price.
  Time and materials: Typically less expensive. No contingency is needed since each hour worked is billed.

Relative Predictability of Cost
  Fixed cost: Higher. Since a fixed price is provided, budgeting can be more accurate.
  Time and materials: Lower. Costs could come in higher or lower than budgeted.

Relative Flexibility
  Fixed cost: Less flexible. The scope and statement of work are followed rigidly so that the supplier can deliver under the cost constraint. This can be frustrating for a buyer who may not have fully planned for all conditions that might surface. Change orders can be implemented to address scope and SOW problems.
  Time and materials: More flexible. The vendor should be managing the work closely (as if the estimate were a fixed cost) and advising the buyer whenever the budget is exceeded.

Fixed or Variable Resource Pool

As in traditional project planning, the size of the resource pool is derived from the project duration, work effort, and other factors. If the number of resources can be predicted with reasonable assurance, a fixed number of resources is likely to be the best approach to staffing the testing team. Sometimes the resources needed may vary, driven by unanticipated events such as a sudden decline in system performance or urgent business process changes. Conversely, a variable resource pool may be needed to address predictable peaks in work, such as planned system releases or upcoming projects. A variable resource pool may be the best approach to meet predictable or unpredictable demands that require different levels of sourcing.

The On-Demand Resource Pool (Unpredictable)

The supplier may accommodate this need by establishing a core team to handle the steady, predictable flow of testing needs and complementing this team with a set of resources on reserve. The resources on reserve are priced at a discounted rate when not actively working; this rate is essentially a retainer fee. When demand calls for the reserve resources, a higher rate is invoked for the period of time utilized. This arrangement provides consistency in the resource assignment (reducing training time and startup), keeps the buyer's costs lower, and affords great flexibility.

The On-Demand Resource Pool (Predictable)

When peak demand is predictable and planned for, the supplier may invoke an approach to increase staff to address the demand, provided that ample advance notice is given. The buyer will pay a predictable fee for the on-demand resources, which can assist with budget planning. Depending on factors like the supplier's engagement portfolio, resources, and timing, the specific testers brought into the project may not have been trained and oriented to the client's environment, requiring additional startup time.


Expenses

Other costs that should be anticipated in an outsourced testing engagement include:

■ Lab fees. When the testing service is delivered offsite, the buyer should anticipate a test lab fee that covers office space, hardware, software, connectivity, office equipment, and other facility costs.
■ Travel and expense. Resources may travel to deliver services on premises or, in the case of offsite/offshore work, travel on occasion between the client's location and the test location. These costs may be passed through to the buyer or factored into the fees.

Quality Management of Solution Engagements

Solution engagements of all types are, by nature, more complex and should include some level of QA processes to ensure that the vendor is delivering as agreed. Mature service providers will include this service within the scope of the engagement.

LESSONS LEARNED FROM OUTSOURCING SAP TESTING

■ Consider engaging in a testing outsourcing strategy early, prior to contracting with the system integrator. Identify any overlap or conflicts in contractual terms and statements of work. Build in processes that allow each vendor to work effectively without negatively impacting the others. Unresolved issues are certain to delay the project, drive up costs, and require renegotiation of terms.
■ Utilize a structured request for proposals (RFP) process to obtain, evaluate, and compare vendors. Ensure that a vendor conference is included in the process so that vendors have an opportunity to thoroughly understand the testing requirements and desired approach. Requiring a presentation from the top two or three vendors can help ensure that expectations are aligned between all parties.


■ Request examples of past experience from both the supplier company and the lead resources that are relevant for the project. Ask for references and follow up with those contacts.
■ Evaluate expertise, experience, communication, flexibility, cost, and overall business value.
■ In offsite and offshore assignments, do not shortcut the onsite relationship team functions.
■ Follow a BOT process for incremental transition of the testing process.
■ Ensure that your organization is ready to take on and maximize the investment of outsourcing.
■ Allow business value to drive the decisions involved in selecting the outsourcing solution.


APPENDIX A

Advanced Testing Concepts

This book so far has covered numerous testing strategies that can be implemented to allow for a successful SAP implementation and testing effort. This appendix provides an overview of some of the more advanced testing concepts that can be implemented to further enhance the testing effort. It covers the Orthogonal Array Testing System (OATS), which describes a statistical approach to narrowing down test input data, plus a very effective way to automate (i.e., the keyword-driven automation approach). It also touches on usability testing, including Section 508. Finally, this appendix covers a test harness architecture.

ORTHOGONAL ARRAYS1

It is generally not feasible or cost effective to test a system using all possible variations and combinations of test parameter inputs. SAP implementations that include SAP variant configuration are prime examples of software implementations that have to test multiple combinations for creating a product based on different parameter inputs and data dependencies among the parameters. An example of SAP variant configuration would be an Internet user interested in purchasing an automobile online, who can have hundreds, if not thousands, of combinations from which to build a car through a manufacturer's website. Another example of a system requiring multiple combinations of test parameter inputs is a tax management system that required testing and contained a calculation engine that computed the depreciation of fixed assets (e.g., computers, airplanes, and office furniture) based on user-supplied information about the assets of the company. These types of computations are complex and sensitive to different combinations of a large number of possible input parameters. Some of these parameters are the "placed in service date" of the fixed asset, methods of depreciation, life of the asset, business use percentages, fixed asset costs, and calendar years, to name only a few of many possible parameters. Each parameter could have numerous data values. As a result, there are tens of thousands of potential input variations, each producing different results and making exhaustive testing of the calculation engine's computations nearly impossible. There is no efficient or quick way to test calculation engine source code changes, since too many of the variables depend on each other. An approach to deriving a suitable set of test cases when it is not feasible to use all the possible combinations and variations of test parameters is the test technique called the Orthogonal Array Testing System (OATS). This technique is very useful for finding a small set of tests (from a large number of possibilities) that exercises key combinations.

1 Modified from Elfriede Dustin, "Orthogonally Speaking," www.stickyminds.com.

THE OATS SOLUTION

OATS is derived from manufacturing techniques developed as part of the industrial engineering discipline. Orthogonal arrays are used as a mathematical tool in the Robust Design methodology described in Madhav Phadke's Quality Engineering Using Robust Design and other books. The Robust Design methodology and design of experiments, created by Professor Genichi Taguchi, are in use in many modern areas of engineering. The OATS technique supports the system test effort by enabling test cases to be determined efficiently and uniformly. With this test technique, testers are able to select the combinations of test parameters that provide maximum coverage from test procedures while using a minimum number of test cases. The assumption is that tests that maximize the interactions between parameters will find more faults. The technique works. In the calculation engine testing, for example, OATS made it possible for the tax management developers to change their application's calculation engine with more confidence by using automatically generated OATS test parameters that were fed into a test harness, which in turn exercised the calculation engine. The engine's outputs were captured and became the baseline for any future changes to the calculation engine. This test harness has proven to be very valuable, as it has uncovered many calculation differences caused by calculation engine source code changes. Moreover, the OATS procedure has given us an objective measure of testing completeness.

What Is an Orthogonal Array?

This section introduces the idea of orthogonal arrays with an example. Suppose there are three parameters (A, B, and C), each of which can take one of three possible values (1, 2, or 3). Testing all possible combinations of the three parameters would require 27 test cases. Are all 27 of those tests needed? Yes, if there is a fault that depends on the precise values of all three parameters (a fault, for example, that occurs only for the case A = 1, B = 1, C = 1). But, because of the way programming works, it is probably more likely that a fault depends on the values of only two of the parameters. In that case, the fault might occur for each of these three test cases: A = 1, B = 1, C = 1; A = 1, B = 1, C = 2; and A = 1, B = 1, C = 3. Since the value of C in this example seems to be irrelevant to the occurrence of this particular fault, any one of the three tests will suffice. Given that assumption, the array in Table A.1 shows the nine test cases required to catch all such faults, in the most economical arrangement that shows all possible pairs among the three variables. The array is orthogonal because, for each pair of parameters, every combination of their values occurs exactly once. That is, all possible pairwise combinations between parameters A and B, B and C, and C and A are shown. In terms of pairs, this array has a strength of 2. It does not have a strength of 3, because not all three-way combinations occur; A = 1, B = 2, C = 3, for example, does not appear. But it covers the pairwise possibilities, which is what pairwise testing is concerned with.


TABLE A.1 Sample Array

      A    B    C
1     1    1    3
2     1    2    2
3     1    3    1
4     2    1    2
5     2    2    1
6     2    3    3
7     3    1    1
8     3    2    3
9     3    3    2
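To make the pairwise-coverage claim concrete, the following short Python sketch (ours, not part of the book's test program) checks that an array such as the one in Table A.1 has strength 2, that is, that every pair of columns covers all nine value pairs:

from itertools import combinations, product

# The nine rows of Table A.1, as (A, B, C) triples.
array = [
    (1, 1, 3), (1, 2, 2), (1, 3, 1),
    (2, 1, 2), (2, 2, 1), (2, 3, 3),
    (3, 1, 1), (3, 2, 3), (3, 3, 2),
]

def has_strength_2(rows, values=(1, 2, 3)):
    """True if every pair of columns covers every pair of values."""
    for c1, c2 in combinations(range(len(rows[0])), 2):
        covered = {(row[c1], row[c2]) for row in rows}
        if covered != set(product(values, repeat=2)):
            return False
    return True

print(has_strength_2(array))  # True: 9 rows cover all pairwise combinations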

Applying the Technique

Before implementing orthogonal array testing, the test engineer needs to determine the size of the array required for the specific system test effort. The size of the orthogonal array is based upon the maximum number of values for all possible parameters. For demonstration purposes, let's look at a simplified example of how we might use OATS to test our favorite tech bookstore's applications. Table A.2 shows example parameters we believe might interact: Type of Credit Card, Credit Card Number, Credit Card Expiration Date, Product Type Purchased, and Quantity Purchased.

TABLE A.2 Bookstore Purchase Parameters and Values

Type of       Credit Card         Credit Card Expiration     Product Type              Quantity
Credit Card   Number              Date (Years from Today)    Purchased                 Purchased
Amex          Correct             50                         Book                      1
Discover      Incorrect Length    Invalid Year               Video                     0
Visa          Invalid Digits      Today                      Software                  –1
                                  Yesterday                  Book, Software, Videos    10
MasterCard    Invalid Character                              Book, Software            1


Each parameter has its own possible values that need to be tested in combination with the values of the other parameters. The possible values pertaining to the Type of Credit Card used by a customer might include American Express, Discover, Visa, and MasterCard. The Credit Card Number entries can be correct or incorrect. All correct numbers are assumed to interact in the same way; that is, if one correct number reveals a fault when combined with a Discover card, all correct numbers will reveal the same fault when combined with a Discover card. Incorrect numbers, however, are assumed to interact differently, depending on whether they have an incorrect length or invalid digits.

Once the parameters and the values have been derived, we have to decide how the parameters are likely to interact. If only pairwise interactions are likely, the array should have a strength of 2. In this case, it seems reasonable to say that pairwise testing is sufficient ("good enough") for this type of application testing, so three-way testing does not seem necessary. (Note that with a higher-risk application, one might want to select an array that allows for three-way or n-way input parameter testing.)

An orthogonal array tool can be used to produce an orthogonal array such as the one in Table A.3. Each resulting row in the orthogonal array specifies one specific test case (without expected results). For example, in row number 0, a test case will be executed using American Express as the credit card, with a credit card number value of 402901517, an expiration date of 2/13/2001, and the purchase of one (1) book. [Note: The credit card numbers used here and in the accompanying table are truncated fictional numbers, so as to avoid any similarity to actual accounts.] Collectively, 25 test cases exercise all pairwise combinations. Exhaustive testing would have required 5^5, or 3,125, test cases.

The test cases contain specific values, rather than markers like "incorrect length" or "invalid year." To construct this example, use a script that replaces the respective values of the orthogonal array with the actual parameters and values needed for the project. While OATS is a useful tool, consider using additional testing techniques to derive your data elements when determining actual values. Techniques such as boundary value analysis (e.g., selecting the maximum, the minimum, one more than the maximum, one less than the minimum, or zero) in combination with OATS can be a powerful technique. (It is common for errors to congregate around boundary values.) In this example, one could pick the quantity 100,000, because that is the maximum number of any item that can be purchased at one time in this example, and then try 99,999 (one less than the maximum), 100,001 (one more than the maximum), and so on.
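As a small illustration of such a substitution script (a sketch only; apart from the "correct" number 402901517 used in row 0 of Table A.3, the concrete values below are invented):

# Map abstract markers from the orthogonal array to concrete test data.
CONCRETE = {
    "Correct": "402901517",         # the correct-number example from Table A.3
    "Incorrect Length": "1234",     # invented: too few digits
    "Invalid Digits": "99X901517",  # invented: non-numeric digit
    "Invalid Year": "2/31/1999",    # invented: impossible date
}

abstract_case = {"Card": "Amex", "Number": "Correct", "Expiration": "Invalid Year"}

# Replace each marker with its concrete value; unknown entries pass through.
concrete_case = {k: CONCRETE.get(v, v) for k, v in abstract_case.items()}
print(concrete_case)
# {'Card': 'Amex', 'Number': '402901517', 'Expiration': '2/31/1999'}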

TABLE A.3 Test Case Definitions

ID   Credit Card   Credit Card Number   Expiration Date   Product                   Quantity
0    Amex          402901517            2/13/2001         Books                     1
1    Amex          123456789            2/15/2001         Software                  0
2    Amex          11111111%            5/15/2001         Videos                    –1
3    Amex          WER11212p            5/28/2050         Books, Software, Videos   10
4    Amex          542212345            3/16/2001         Books, Software           1
5    Discover      402901517            2/15/2001         Videos                    10
6    Discover      123456789            5/15/2001         Books, Software, Videos   1
7    Discover      11111111%            5/28/2050         Books, Software           1
8    Discover      WER11212p            3/10/2001         Book                      0
9    Discover      542212345            2/13/2001         Software                  –1
10   Visa          402901517            5/15/2001         Books, Software           0
11   Visa          123456789            5/28/2050         Books                     –1
12   Visa          11111111%            2/22/2001         Software                  10
13   Visa          WER11212p            2/13/2001         Videos                    1
14   Visa          542212345            2/15/2001         Books, Software, Videos   1
15   MasterCard    402901517            5/28/2050         Software                  1
16   MasterCard    123456789            3/08/2001         Videos                    1
17   MasterCard    11111111%            2/13/2001         Books, Software, Videos   0
18   MasterCard    WER11212p            2/15/2001         Books, Software           –1
19   MasterCard    542212345            5/15/2001         Books                     10
20   Visa          402901517            3/26/2001         Books, Software, Videos   –1
21   Visa          123456789            2/13/2001         Books, Software           10
22   Visa          11111111%            2/15/2001         Books                     1
23   Visa          WER11212p            5/15/2001         Software                  1
24   Visa          542212345            5/28/2050         Videos                    0


This example also illustrates an issue that often arises: unspecified values. Because there are fewer Type of Credit Card and Credit Card Number values than there are Expiration Date values, not all rows in the orthogonal array are required to exercise all the pairwise combinations involving them. For some of the rows, the value of Type of Credit Card or Credit Card Number is left to the discretion of the test engineer. The values can be chosen based on risk, highest usage, or highest problem area. Table A.4 provides an example. American Express might have been chosen because it is the card with which the bookstore's application traditionally has had the most trouble, or because it is used most often. A correct Credit Card Number might have been chosen because incorrect number input has not been an issue in the past.

TABLE A.4 Sample Combination

Type of       Credit Card   Credit Card Expiration     Product Type      Quantity
Credit Card   Number        Date (Years from Today)    Purchased         Purchased
Amex          Correct       Invalid Character          Books, Software   1

In some cases, combinations of test parameters and values can be invalid, and an invalid test case combination is generated (depending on business logic). For example, with three parameters (A, B, and C), it might be invalid for both A and C to have a value of 1, but the OATS tool would still generate that combination. In that case it is up to the test engineer to make a decision: execute the test case as is (garbage in, garbage out) and determine how the system handles invalid combinations of input, or decide not to use the invalid test case combinations, either to shorten the test case evaluation cycle or because the back-end system does not allow invalid input combinations. But choose carefully: if you throw out the invalid cases, you might also be throwing out other combinations. For example, the A = 1, C = 1 case might be the only row containing A = 1, B = 3 (a valid combination that will not be tested if you throw out the row). And if you throw out the invalid cases, you will not know how the system behaves when given these invalid combinations. In the case study of the calculation engine of the asset management system, input of invalid test combinations was allowed. The calculation engine was expected to produce consistent results among the various builds, whether the input was valid or invalid.

Please note that Table A.4 is a simplified excerpt, one that is small and readable, of a sample test case combination. In addition to the parameters illustrated here, the bookstore might also wish to track the number of books that remain in inventory following the purchase, or to query the purchase status for a particular customer. Test professionals should review the resulting test cases and add test cases based on known risk areas. In our test program for the asset management calculation engine, we executed a test harness that generated over 17,000 test cases using OATS. A software program was then developed that incorporated these test cases and applied them to the back end of the Web application one by one. (Such a software program might be tailored to create a particular load on the system, or simply to verify baseline functionality.)
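The trade-off described above, where discarding an invalid row can silently discard the only occurrence of a valid pair, can be seen directly on the array of Table A.1. A minimal Python sketch (the invalidity rule is the hypothetical A = 1, C = 1 example from the text):

# Rows of Table A.1 as (A, B, C).
rows = [
    (1, 1, 3), (1, 2, 2), (1, 3, 1),
    (2, 1, 2), (2, 2, 1), (2, 3, 3),
    (3, 1, 1), (3, 2, 3), (3, 3, 2),
]

def is_invalid(row):
    a, _, c = row
    return a == 1 and c == 1  # hypothetical business rule from the text

kept = [row for row in rows if not is_invalid(row)]

# Which (A, B) pairs were lost by discarding the invalid rows?
lost_ab = {(r[0], r[1]) for r in rows} - {(r[0], r[1]) for r in kept}
print(lost_ab)  # {(1, 3)}: the valid pair A=1, B=3 would now go untested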

KEYWORD-DRIVEN AUTOMATION APPROACH 2

A keyword-driven automation approach to testing is similar to a data-driven approach in that it makes use of a data input file, but here the file contains not only data but also keywords for the associated controls, commands, and expected results. As a result, test script code is separated from data, which minimizes script modification and maintenance effort. When using this approach, it is important to separate the activity of determining what requirements to test (simple user commands) from the effort of determining how to test the requirements (actually implementing the code to perform each command). The functionality of the application selected for testing is documented in a table, together with the step-by-step instructions for each test; see Table A.5, Keyword-Driven Automation. Once the table has been created, a simple parser can be written that reads the steps from the table; the keyword determines how to execute each step by calling the specific function, and the parser performs error checking based on the error codes returned, among other features.

2 Adaptation of the data-driven approach discussed in Elfriede Dustin, 1999, Automated Software Testing, Reading, MA: Addison-Wesley.


TABLE A.5 Keyword-Driven Automation: Table Used to Generate an Automated Test Script

Window (VB Name)    Window (Visual Test)    Control      Action       Arguments
StartScreen         XYZ Savings Bank                     SetContext
PrequalifyButton    Prequalifying           PushButton   Click
frmMain             Mortgage Prequalifier                SetContext
frmMain             File                                 MenuSelect   New Customer

The parser extracts information from the table for the purpose of developing one single (large) test procedure. The resulting code is depicted below.

Script That Makes Use of Keyword-Driven Automation

Window SetContext, "VBName=StartScreen;VisualText=XYZ Savings Bank", ""
PushButton Click, "VBName=PrequalifyButton;VisualText=Prequalifying"
Window SetContext, "VBName=frmMain;VisualText=Mortgage Prequalifier", ""
MenuSelect "File->New Customer"
ComboListBox Click, "ObjectIndex=" & TestCustomer.Title, "Text=Mr. "
InputKeys TestCustomer.FirstName & "{TAB}" & TestCustomer.LastName & "{TAB}" & TestCustomer.Address & "{TAB}" & TestCustomer.City
InputKeys "{TAB}" & TestCustomer.State & "{TAB}" & TestCustomer.Zip
PushButton Click, "VBName=UpdateButton;VisualText=Update"
. . .
'End of recorded code
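The parser itself can be sketched briefly in Python (our illustration, not the book's tool; the handlers here print instead of driving a GUI test tool):

import csv

# Keyword handlers; real implementations would call the GUI test tool's API.
def set_context(window, control, args):
    print(f"Set context to window {window}")

def click(window, control, args):
    print(f"Click {control} in {window}")

def menu_select(window, control, args):
    print(f"Select menu {control} -> {args} in {window}")

KEYWORDS = {"SetContext": set_context, "Click": click, "MenuSelect": menu_select}

def run_table(path):
    """Read each step from the keyword table and dispatch it, with error checking."""
    with open(path, newline="") as f:
        for step in csv.DictReader(f):
            handler = KEYWORDS.get(step["Action"])
            if handler is None:
                raise ValueError(f"Unknown keyword: {step['Action']}")
            handler(step["Window"], step.get("Control", ""), step.get("Arguments", ""))

run_table("keyword_table.csv")  # assumed columns: Window, Control, Action, Arguments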

The test team could create a GUI Map containing entries for every type of GUI control that the testing would need to address. Controls would include every push button, pull-down menu, drop-down box, and scroll button. Each entry in the GUI Map would contain information on the type of control, the control item's parent window, and the size and location of the control in the window. Each entry would also contain a unique identifier, similar in concept to a control ID. The test engineer uses these unique identifiers within test scripts in much the same way that object recognition strings are used.


The GUI Map serves as an index to the various objects within the GUI and to the corresponding test scripts that are available to perform tests on those objects. The GUI Map can be implemented in several ways, including the use of constants or global variables, where every GUI object is represented by a constant or global variable. The GUI Map can also be supported through the use of a data file, such as a spreadsheet. The map information can then be read into a global array; by placing the information into a global array, it becomes available to every test script in the system and can be reused and called repeatedly. In addition to reading GUI control data from a file, expected-result data can also be placed into a file and retrieved. This way, the automated test tool can automatically compare the actual result produced by a test to the expected result maintained in the file.

When developing this keyword-driven approach, it is important that the test team keep in mind the size of the application, the size of the test budget, and the return on investment that can be expected from applying the approach. Consider the example of a test engineer named Bill, who demonstrated a keyword table-driven approach at a test tool user group meeting. Bill had developed a significant number of scripts to support a keyword-driven approach, even though the application he was trying to test in an automated fashion was quite simple: it performed simple record add, delete, and update functions. It would have been much more efficient to simply use a data file to enter the various records, which would have amounted to a test development effort of no more than half an hour, and the resulting script could have been reused as often as necessary. The keyword-driven approach had taken Bill two weeks to develop.
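As a rough sketch of the data-file variant (file and column names are invented), a GUI Map kept in a CSV file could be loaded into a global structure that every test script can consult:

import csv

# Global GUI Map: unique control identifier -> control metadata.
GUI_MAP = {}

def load_gui_map(path):
    """Assumed columns: id, control_type, parent_window, x, y, width, height."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            GUI_MAP[row["id"]] = row

load_gui_map("gui_map.csv")

# A test script refers to controls by their unique identifiers, much as it
# would use object recognition strings:
button = GUI_MAP["UpdateButton"]
print(button["control_type"], button["parent_window"])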

More on Keyword-Driven Testing 3

Keyword-driven testing is actually a concept known by many names. It is sometimes called "table-driven testing," "action-based testing," and even "data-driven testing" in some contexts or in the user's manual from many commercial vendors of automated test tools. In the world of software test automation, two of the most important things to know about keyword-driven testing are that it provides for the separation of certain roles in the testing process, and that it allows us to separate our key test assets from the tools that will execute them.

3 Contributed by Carl Nagle.

When talking about the separation of roles, we are talking about the ability to retain nonprogrammer testers in the role of Test Designer, while allowing for a separate Test Automator role if desired. At the highest level, the Test Designer, a nonprogrammer with respect to test tools (the equivalent of the business analyst, SAP configuration expert, or subject matter expert), is able to express executable keyword-driven tests in the vocabulary most suited to the application. There is no need to learn a specific tool's programming language, because the keyword-driven tests are not written for any specific tool to execute. In fact, high-level keyword-driven tests such as those shown below are even suitable for manual execution. Here is a simple example of high-level keyword-driven test instructions:

Keywords      Parameters
LoginAsUser   "admin"   "adminPassword"
AddEmployee   "John"    "Smith"
VerifyEmpID   "12345"

As we can see, the high-level keyword-driven tests are easy to develop and easy to interpret. They can be written using familiar text editors, spreadsheet tools, or table editors. A manual tester or test auditor should have no problem understanding what the test is intended to do. Some keyword-driven automation frameworks go only this far. The role of the Test Automator (i.e., the person who brings expertise in the automated test tools) is then to create the execution engine that can interpret the above tests and call the appropriate automation tool functions to accomplish each task. In this scenario, an execution engine must be written specifically for the application being tested, in the language of the tool that is going to test it. When it is time to test another application, a new execution engine must be written to support it.

Extended keyword-driven frameworks go a step further and implement execution engines that are not at all tied to the application being tested. They provide an additional low-level layer of keyword support that allows even the Test Automator to develop test assets that are independent of the tool that will execute them. This means the execution engine need be created only once, and it can be used to test any number of applications. In addition, the execution engine can be written for different automation tools while the tests themselves do not have to change. This effort is always much smaller than rewriting all the automated tests for all the tested applications using other test automation techniques. As we will see in the next example, the most basic test automation steps are available as low-level keywords in an extended keyword-driven framework. A Test Designer's "LoginAsUser" test would be implemented by the Test Automator using the low-level keyword instructions provided by the extended framework:

Keywords   Parameters
SetText    LoginWindow   UserName   "admin"
SetText    LoginWindow   Password   "adminPassword"
Click      LoginWindow   Submit

As should be evident, the role of the Test Automator in this scenario is not to write a new execution engine but rather to exploit the extended execution engine that already exists. These low-level assets are still in a simple text format and are not tied to any particular test automation tool; we leverage an execution engine that knows how to interpret these simple commands. If the need to migrate to a new testing tool ever arises, we need only write a comparable execution engine, and all of the tests for all of our applications remain valid.
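A minimal sketch of such an extended execution engine in Python (our illustration; a real engine would delegate to whichever GUI automation tool is in use rather than print):

# Low-level keyword handlers; a real engine would call the automation tool here.
def set_text(window, control, value):
    print(f"{window}.{control} <- {value!r}")

def click(window, control):
    print(f"click {window}.{control}")

ENGINE = {"SetText": set_text, "Click": click}

# The Test Designer's "LoginAsUser" keyword, expanded into low-level steps:
LOGIN_AS_USER = [
    ("SetText", "LoginWindow", "UserName", "admin"),
    ("SetText", "LoginWindow", "Password", "adminPassword"),
    ("Click", "LoginWindow", "Submit"),
]

def execute(steps):
    for keyword, *params in steps:
        ENGINE[keyword](*params)  # dispatch each step to its handler

execute(LOGIN_AS_USER)

Migrating to a different automation tool would mean swapping the handler bodies; the low-level steps themselves would not change.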

Pros and Cons of Keyword-Driven Testing

While this all sounds great, there are some trade-offs that come with going the keyword-driven route:

■ No automated scripting. This is not a record-and-playback scenario. Tests must be planned and developed in the keyword-driven format.
■ Longer to develop tests. Because there is no record option for this type of testing, it does tend to take longer to put a test in place than it would for a simple recorded script.
■ Framework learning curve for new Test Automators. In addition to learning the nuances of the automation testing tool itself, the Test Automator must learn about the additional keyword-driven layer sitting on top of it.
■ Framework from scratch can be cost prohibitive. If the project cannot leverage an existing framework, then one must be created from scratch. This can take weeks to complete, and often this has not been factored into existing project schedules.

Keyword-driven testing, however, does provide tremendous advantages over more traditional automation techniques when used in the proper context. Some of these benefits include:

■ Tests easy to read/enhance. We do not need to be programmers to create or interpret the tests. Thus, manual testers and test auditors can review the tests and actually understand and even execute them.
■ Separates test development from automation tools. Because this is not a record-and-playback technique, test development by nonprogrammers can actually begin long before the application has been delivered for testing. And since the keyword-driven tests are not coded in the programming language of any specific testing tool, the tests can migrate from tool to tool over time whenever necessary. (It is even possible to use multiple automation tools, each handling the part of the test it is most capable of handling.)
■ Testers can migrate across projects more readily. Where keyword-driven testing is used throughout an organization, testers can readily migrate from project to project regardless of the test automation tools that might be deployed in different areas. A tester writing keyword-driven tests executed by Vendor A can just as easily write keyword-driven tests executed by Vendor B, because the keyword-driven tests are not tied to either tool and the editors used to write the tests are the same for all execution engines.
■ Framework is application independent. The automation tool libraries used to execute keyword-driven tests can be made entirely generic and independent of the tested applications. This is an extraordinary level of code reuse, robustness, and maturity, providing significantly reduced long-term maintenance costs.

For more detailed information on keyword-driven testing, read the whitepaper on Test Automation Frameworks at http://safsdev.sourceforge.net.

USABILITY TESTING 4

Usability testing evaluates the human factor, that is, usability problems, and helps to measure whether usability goals are met. One of the early references to a usability engineering methodology was offered by Gould and Lewis (1985), who describe a very general approach to usability engineering involving three global strategies:

1. Early focus on users and tasks. This strategy involves applying such tasks as user profiling, task analysis, prototyping, and user walkthroughs.
2. Empirical measurement. Here, such tasks and techniques as questionnaire administration, laboratory and field usability studies, and usage studies represent some of those available for collecting objective, quantitative performance and satisfaction data.
3. Iterative design. Systems built using a User Interface Management System (UIMS) allow radical changes to the interface (as opposed to the application code itself) to be made quickly and easily in response to empirical data. This makes iterative testing and redesign feasible.

Usability tests 5 are performed in order to help verify that the system is easy to use and that the user interface appearance is appealing. Usability tests consider the human element in system operation, and the test engineer needs to evaluate the application from the perspective of the end user.

4 Adapted and modified from Elfriede Dustin, 1999, Automated Software Testing, Reading, MA: Addison-Wesley.
5 Modified from Elfriede Dustin, 2002, "Usability Testing," in Effective Software Testing, Addison-Wesley, 2002.

Test development considerations for usability tests include approaches where the user executes a prototype of the actual application before the real functionality has been built. By running a capture/playback tool in capture mode while the users are executing the prototype, recorded mouse movements and keystrokes can track where the users move and how they would operate the system. Reading these captured scripts can help the designers understand the usability of the application design.

Inadequate attention to the usability aspects of an application can cause it to have a poor acceptance rate among end users, based on the perception that it is not easy to use or does not perform the necessary functions. This can lead to increased technical support calls, and it can negatively affect application sales or user acceptance. Usability testing is a difficult but necessary part of delivering an application that satisfies the needs of its users. The primary goal of usability testing is to verify that the intended user base of the application is able to interact properly with the application and have a positive and convenient experience. This requires an examination of the layout of the application's interface, including navigation paths, dialog controls, text, and other elements as necessary, such as localization and accessibility testing requirements. In addition, supporting components such as the installation program, documentation, and help system must also be investigated.

In order to properly develop and test an application for good usability, it is necessary to gain an understanding of the target audience of the software and its needs. This information should be prominently featured in the application's business case and other high-level documents. There are several ways to determine the needs of the target audience from a usability perspective:

■ Hire subject matter experts. Having staff members who are also experts in the domain area is a necessity in the development of a complex application. These staff members can counsel the requirements, development, and testing teams on a continual basis, which can be a great asset to the effort. It is usually necessary to have multiple subject matter experts on hand, since opinions on certain domain-related rules and processes can differ.
■ Focus groups. An excellent way to get end-user input on a proposed user interface is to hold focus group meetings with potential customers to get their feedback on what they would like to see in an interface. Prototypes or screenshots are useful tools in a focus group discussion. It is important to make sure that the members of the focus groups are representative of all of the actual end users of the product, so that adequate coverage is achieved.
■ Surveys. Although not as effective as the above approaches, surveys can yield useful information about how potential customers would use a software product to accomplish their tasks.
■ Similar products. Investigating similar products can provide information on how the problem has been solved by other groups, in other problem domains as well as the same problem domain. Although user interfaces should not be blatantly copied from another product, it is useful to see how other groups or competitors have chosen to approach the user interface of the application.
■ Observation. Monitoring a user's interaction with an application's user interface can provide a wealth of information about its usability. This can be accomplished by simply taking notes while the user works with the application, or by videotaping the session for later analysis, enabling the usability tester to see where users stumbled with the user interface and where they found it intuitive.

As with most nonfunctional requirements, early attention to usability issues can produce much better results than attempting to retrofit the application at a later time. Some application designs and architectures may not be suitable for the required user interface, and they would therefore be difficult to change later if the application is judged poor from a usability perspective. In addition, a large amount of time and effort is expended to craft the application's user interface, so it is wise to specify the correct interface as early as possible in the process.

An effective tool in the development of a usable application is the user interface prototype. Developing this kind of prototype allows interaction among potential users, requirements personnel, and developers to determine the best approach to the application's interface. Although this can be done on paper, interactive prototypes are the best approach, since they give a "preview" of what the application will look like. Prototypes, in conjunction with requirements documents, can also provide an early basis for developing test procedures. During the prototyping phase, usability changes can be implemented without much impact on the development schedule.

Later in the development cycle, end-user representatives or subject matter experts should participate in the usability tests. If the application is targeted at multiple types of end users, then at least one representative from each group should take part in the tests. Participation can take place at the site of the software development organization, or it can be done using a prerelease version of the software sent to the end user's site, accompanied by usability evaluation instructions. Each end user will note areas where the interface was not usable or not understood, and will provide feedback on how it could be improved. Remember that at this stage in the development life cycle, large-scale changes to the application's user interface are typically not practical, so only refinements should be targeted here.

A similar approach can be taken for an application that is already in production. Feedback and survey forms are useful tools in determining what usability improvements should be made for the next version of the application. This type of feedback can be extremely valuable, since it comes from paying customers who have a vested interest in seeing the application improved to meet their needs.

Another aspect of usability is Section 508, 6 which refers specifically to Section 508 of the Rehabilitation Act of 1973, as amended by the Workforce Investment Act of 1998 (to learn more about Section 508, visit www.section508.gov). SAP implementations in the U.S. federal and Department of Defense sectors are subject to Section 508 compliance. The law requires federal agencies to purchase electronic and information technology that is accessible to employees with disabilities and, to the extent that those agencies provide information technology to the public, it too must be accessible to persons with disabilities. Section 508 was actually included in an amendment to the Rehabilitation Act in 1986, with the requirement that the federal government provide accessible technology to employees and to the public; however, the 1986 version provided no guidance for determining the accessibility of information technology, and there were no enforcement procedures. The 1998 amendment addressed both of these issues. If an application has to be Section 508 (accessibility) compliant, there are numerous tools on the market that allow for Section 508 compatibility testing, such as Bobby, described at http://www.jimthatcher.com/testing4.htm.

6 www.access-board.gov/sec508/standards.htm.

TEST HARNESS

Some components of a system can be tested only by developing a test harness. For example, consider the tester who is designing tests for a calculation engine that allows for hundreds of thousands of input combinations, as described in the section on orthogonal arrays. This type of testing requires a different test design from user interface or black-box testing. Since the combinations and variations of inputs to the calculation engine are too numerous to test through the user interface, due to speed and other issues, it may be necessary to develop a test harness that exercises the calculation engine directly, supplying a large set of input values and verifying the outputs.

A test harness is a tool that performs automated testing of a core component of a program or system, and it can be developed to allow deeper testing of core components. Usually written in a robust programming language, such as a standalone Java, C++, or VBA program, a custom-built test harness will typically be faster and more flexible than an automated test tool script, which may be constrained by the test tool's specific environment.

A test harness can also be used to compare a new component against a legacy component or system. Often, two systems do not use the same data storage format and have different user interfaces built with different technologies. Therefore, any automated test tool would need a special mechanism, or would require a duplicate automated test script development effort, in order to run identical test cases on both systems and generate identical (or at least comparable) results. In the worst case, duplicate test scripts would have to be developed using two different sets of automated testing tools, if one tool is not compatible with both systems. Instead, a custom-built, automated test harness can be written that encapsulates the differences between the two systems in separate modules and allows targeted testing to be performed against both systems. Typically, the test harness will interact with each system below the user interface, to achieve optimum performance and stability. An automated test harness can take the baseline of the test results generated by a legacy system and automatically verify the results generated by the new system, comparing the two result sets and outputting any differences.

One way to implement this is to use a test harness adapter pattern. A test harness adapter is a module that "adapts" each system under test to be compatible with the test harness, which executes predefined test cases against the systems, through the adapters, and stores the results in a standard format so that results can be automatically compared from one run to the next. For each system to be tested, a specific adapter must be developed that is capable of interacting with the system (directly against its DLLs or COM objects, for example) and executing the test cases against it. Note that testing two systems with a test harness requires two different test adapters and two separate invocations of the test harness, one for each system. The first invocation produces a test result, which is saved and then compared against the test result of the second invocation. Exhibit A.1 depicts a test harness that is capable of executing test cases against a legacy system and a new system.

[Exhibit A.1 shows the basic architecture: a set of test cases feeds the test harness, which drives the legacy system and the new system through their respective test harness adapters and produces a test result for comparison.]

EXHIBIT A.1 Test Harness Basic Architecture
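A skeletal Python rendering of the adapter pattern just described (class and method names are our own invention, not the book's implementation):

from abc import ABC, abstractmethod

class TestHarnessAdapter(ABC):
    """Adapts one system under test to the harness's common interface."""
    @abstractmethod
    def execute(self, test_case: dict) -> dict:
        """Run one test case against the system; return a standard-format result."""

class LegacySystemAdapter(TestHarnessAdapter):
    def execute(self, test_case):
        # Would invoke the legacy system's DLLs directly, below the GUI.
        return {"id": test_case["id"], "output": "legacy-value"}

class NewSystemAdapter(TestHarnessAdapter):
    def execute(self, test_case):
        # Would invoke the new system's COM objects or service layer.
        return {"id": test_case["id"], "output": "new-value"}

def run_harness(adapter, test_cases):
    """One invocation of the harness against one system, via its adapter."""
    return [adapter.execute(tc) for tc in test_cases]

test_cases = [{"id": 1}, {"id": 2}]  # real cases would carry input parameters
baseline = run_harness(LegacySystemAdapter(), test_cases)   # first invocation
candidate = run_harness(NewSystemAdapter(), test_cases)     # second invocation
diffs = [(b, c) for b, c in zip(baseline, candidate) if b != c]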


Identical test cases can be run against multiple systems using a test harness adapter for each system. The adapter for a legacy system can be used to establish a base set of test results against which the results for the new system can be compared. The test harness adapter works by taking a set of test cases and executing them in sequence directly against the application logic of each system under test, bypassing the user interface. This allows for maximum throughput of the test cases. Results from each test case are stored in one or more results files, in a format, such as XML, that is the same regardless of the system under test. Result files can be retained for later comparison with the results files generated in subsequent test runs. To compare the results of the tests, a custom-built results comparison tool reads and evaluates the result files and outputs any errors or differences found. It is also possible to format the results so they can be compared with a standard "file diff" tool.

As with any type of test, test harness test cases may be quite complex, especially if the component tested by the harness is of a mathematical or scientific nature. Since there are sometimes millions of possible combinations of the various parameters involved in the calculations, there are also potentially millions of possible test cases. Given time and budget constraints, it is unlikely that all possible test cases will actually be expressed and tested; however, it is likely that many thousands of test cases will be developed and executed using the test harness. With thousands of different test cases to be created and executed, test case management becomes a significant effort. Detailed below is a general strategy for developing and managing test cases for use with the test harness, which is also applicable to other parts of the testing effort.

■ Creating test cases. Test cases for a test harness are developed in the same fashion as test cases for manual testing, using various test techniques. A test technique is a formalized approach to choosing the test conditions that give a high probability of finding defects. Instead of guessing at which test cases to choose, test techniques help testers derive test conditions in a rigorous and systematic way. A number of books on testing describe techniques such as equivalence partitioning, boundary value analysis, cause–effect graphing, and others. 7 A brief overview is provided here:
  ● Equivalence partitioning identifies the ranges of inputs and initial conditions that are expected to produce the same results. Equivalence relies on the commonality and variances among the different situations in which a system is expected to work.
  ● Boundary value testing is used mostly for testing input edit logic. Boundary conditions should always be part of your test scenarios, since it has been proven that many defects occur on the boundaries. Boundaries define three classes of data: good, bad, and on the border (in-bound, out-of-bound, and on-bound). Boundary testing uses values that lie in or on the boundary, such as endpoints, maximum/minimum values, and field lengths.
  ● Cause–effect graphing 8 is a technique that provides a concise representation of logical conditions and corresponding actions, represented in a graph with the causes on the left and the effects on the right.
  ● Orthogonal array testing enables the selection of the combinations of test parameters that provide maximum coverage from testing procedures, using a minimum number of test cases. Test cases using orthogonal array testing can be generated in an automated fashion. (See the section on orthogonal arrays.)
■ Establishing a common starting point. All test cases must establish a well-defined starting point that is the same every time the test case is executed. Setting up a template with record types and record fields, and then creating a new set of records using this template before running a series of test cases, can provide this common starting point. When modular test components are reused, they must hand off the application in the same state they found it, ready for the next test component to run; otherwise, the second test component will always fail, since its assumed starting point is incorrect.
■ Manage test results. Test scripts produce a test result for every transaction set they execute. The test results are generally written to a file. A single test script can write results to as many files as desired, though in most cases a single file should be sufficient. After running a series of test cases, a number of files containing test results will have been created. Since running any given test case should produce the same results every time it is executed, the test results files can be compared directly via a simple file diff, or by using a custom-developed test results comparison tool. The differences that this comparison produces need to be evaluated, and defects need to be determined, documented, and tracked to closure.

7 See Boris Beizer, 1990, Software Testing Techniques, International Thomson Computer Press; also see G. J. Myers, 1979, The Art of Software Testing, New York: John Wiley & Sons.
8 G. J. Myers, 1979, The Art of Software Testing, New York: John Wiley & Sons.
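To illustrate the comparison step (the file names and XML layout below are assumptions for the sketch, not a prescribed format):

import xml.etree.ElementTree as ET

def load_results(path):
    """Parse a results file into {test_case_id: result_text}."""
    root = ET.parse(path).getroot()
    return {tc.get("id"): tc.findtext("result") for tc in root.iter("testcase")}

baseline = load_results("baseline_results.xml")    # e.g., the legacy system run
candidate = load_results("candidate_results.xml")  # e.g., the new system run

# Flag every test case whose result differs between the two runs.
for case_id in sorted(baseline):
    if baseline[case_id] != candidate.get(case_id):
        print(f"DIFF {case_id}: {baseline[case_id]!r} != {candidate.get(case_id)!r}")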

A custom-built test harness can provide a level of testing above and beyond that of automated test tool scripts. Although a test harness can be time-consuming to create, it allows deeper coverage of sensitive application areas and also allows two applications to be compared.

APPENDIX B

Case Study: Accelerating SAP Testing *

SAP testing accelerators are a new trend from software testing vendors to introduce or facilitate test automation efforts. SAP test accelerators are prebuilt libraries of previously automated test cases representing SAP test transactions that can be customized or modified to meet a project's specific and unique configuration settings. SAP test accelerators hold the promise of reducing the cycle time needed to automate SAP end-to-end processes (i.e., hire-to-retire, request-to-pay, etc.) while empowering the SAP project's nontechnical members to assemble and execute automated test cases. Although SAP testing accelerators ostensibly offer superior benefits over traditional SAP automation efforts, in which SAP transactions are recorded from scratch, they also have potential drawbacks that are often obscure and can hamper automation progress. Many of the drawbacks of SAP testing accelerators are overcome by what is known as a "next-generation accelerator."

BACKGROUND

Test accelerators refer to prebuilt and generically recorded test cases that can be used to test packaged enterprise business applications. Accelerators typically provide most, if not all, elements necessary to test an entire end-to-end business process such as order-to-cash. The original thinking was that if a single application was deployed at many locations, a single out-of-the-box library of prerecorded test cases that could be modified as needed would accelerate the implementation of test automation by providing prebuilt content in a proven framework to the SAP user community.


Initial test accelerator assets focused on screen logic, screen elements, and test scripts. This made it possible for companies to reconfigure and edit these preexisting assets to reflect the unique configuration settings established at each SAP installation, which in turn allowed test developers to greatly reduce the effort associated with building a test asset development framework and automated test cases from scratch. This reusability of test assets was a tremendous benefit to the test script developer: by reusing the fundamental screen elements, it was possible to quickly put together many different test scripts in a short period of time.

CHALLENGES

While test accelerators were an improvement over the traditional approach of developing automated test cases from scratch, they still suffered from four main problems:

1. Limited system validation
2. Increased maintenance
3. High costs
4. Complex data management

Traditional SAP test accelerators do not embed sufficient programming logic for validating business processes or for validating processes at the back end of the application.

An effective test acceleration solution must incorporate test asset maintenance in its thinking. When we say maintenance, we are referring to change management and control. Changes are a natural part of any business process, and these changes percolate down to the test assets as well. For a test accelerator to be effective, it must contemplate this reality and provide a solution to easily manage, modify, and evolve with changing SAP business processes.

The current model most graphical user interface (GUI) test tool providers use for managing data for a test script is a spreadsheet. For each SAP transaction in a test script, they will associate one spreadsheet to input the data and another spreadsheet for validating the results of that transaction. Exhibit B.1 is a diagram of an order-to-cash (OTC) end-to-end scenario encompassing multiple SAP transactions strung together.

[Exhibit B.1 decomposes an order-to-cash scenario into its SAP execution transactions (sales order, delivery, goods issue, billing, incoming payment) and the surrounding validation transactions (stock overview, balance display, display of the sales order, outbound delivery, material document, accounting documents, customer account balances, conditions, and customer master data).]

EXHIBIT B.1 Decomposition of Order-to-Cash Scenario

The SAP test accelerator will offer a series of automated test cases for each transaction, linked together to form a single test script for the complete business process. Each transaction requires a spreadsheet to drive the execution. It is likely that the end-to-end process in Exhibit B.1 for OTC will have over 20 spreadsheets associated with it. Considering that an organization may have 20 different OTC scenarios that must be tested, it is possible to have hundreds of spreadsheets containing the test data for OTC alone.

Existing SAP test accelerators have prebuilt libraries that are generic, and therefore any economies of scale are limited. For example, every time a test script writer constructs an automated test case for entering an order through an SAP transaction such as VA01 (sales order creation), it is largely a unique activity subject to the specific SAP configuration settings under which the process was automated. When one multiplies this effort across all the transactions that are part of a typical SAP end-to-end scenario, it becomes obvious that there is a lot of labor involved in constructing and modifying automated test cases from SAP test accelerators. Current tool vendors do not want to point this out because they want to sell you their tools and SAP test accelerators. Service vendors do not want to point this out because they would rather maximize their profits from the billable hours associated with supporting and maintaining test cases derived from SAP test accelerators.

AN ENHANCED APPROACH

Now that we have identified some of the issues with the current paradigm of SAP test automation and first-generation test accelerators, let us look at how one assembles a better solution through next-generation SAP test accelerators. We will look at new test script creation methods, new methods for managing changes to test assets, new concepts for managing test data, more efficient techniques for performing lights-out testing, and a different cost model that makes test automation generate a respectable return on investment (ROI), all within the context of a new SAP-centric test acceleration paradigm. Furthermore, the new paradigm includes the concept of a labor cost model based on one-time test case automation that is distributed everywhere.

One-time test case automation that is delivered everywhere implies an inherent leverage in every test case that is automated. In next-generation test accelerators, this is accomplished through the use of test components. Test components can be thought of as automated test cases that have the functionality to test all of the configuration permutations of the SAP transaction they cover. For example, the same test component for SAP transaction VA01 can be used to test VA01 at various SAP implementations regardless of the SAP configuration settings for that transaction; the component is simply reconfigured to mirror the specific configuration of VA01 at each SAP installation. The labor associated with the construction of the test component is thus distributed across the various SAP implementations that have transaction VA01 as part of their functional scope.

Test components are capable of testing all the different configuration settings of an SAP transaction. Each test component corresponds to an SAP transaction code, so a succession of components can be quickly strung together to test an end-to-end business process. To customize this sequence to a specific SAP configuration, the test developer selects from a table the screens used in a given transaction and the fields used for each screen; in other words, the developer configures the component to match the configuration of the transaction code, as sketched below.

There are many benefits to an automated test library for SAP that is constructed of transaction-level components. The first is naturally cost, as the leverage of automate-once, distribute-everywhere is intuitive. Another, equally important benefit is the implied framework inherent in its structure: the configuration and implementation raise the discussion from a technical, GUI test tool level up to a business-process level more in keeping with the tenets of SAP. A third benefit is that this implied structure allows changes to test assets to be made in a much more familiar and comfortable manner. Changes such as field additions or deletions and screen additions or deletions do not need to happen at a code level but rather through a forms-based selection process, eliminating test script authoring entirely. This simple change reduces cycle time dramatically by cutting the mental processing time needed to author a test script; in fact, no test script authoring is necessary.
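A rough illustration of that forms-based configuration (the screen and field names are invented; real accelerators hold this as metadata in a repository, not as code):

# Hypothetical metadata recording which screens and fields of VA01 are in
# use at this installation; the generic VA01 component reads this instead
# of being re-recorded.
VA01_CONFIG = {
    "tcode": "VA01",
    "screens": [
        {"name": "Initial", "fields": ["OrderType", "SalesOrg", "DistChannel"]},
        {"name": "Overview", "fields": ["SoldToParty", "Material", "Quantity"]},
    ],
}

def configure_component(component, config):
    """Restrict a generic transaction component to the screens/fields in use."""
    component["active"] = {s["name"]: s["fields"] for s in config["screens"]}
    return component

va01 = configure_component({"tcode": "VA01"}, VA01_CONFIG)
print(va01["active"]["Overview"])  # ['SoldToParty', 'Material', 'Quantity']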

The time savings, efficiency, and accuracy of configuring these test components, versus constructing an automated test case, are analogous to a schoolday preference for taking a true/false test rather than an essay test: it is simply faster, cheaper, and better. Next-generation accelerator pricing has driven the cost of SAP test automation down to approximately one person-year of effort to cover the majority of the core SAP critical business processes. Even for the smallest of installations, the gain in efficiency is hard to summarily dismiss as too expensive, and even the most skeptical are wise to take a closer look.

In a next-generation accelerator, test components not only accommodate test execution but perform validation as well. By building validation into the test component, a tester does not have to perform endless screen reads to retrieve the validation values for a field. A test component simplifies the process of field validation because it knows the location of the field within the SAP database and reads its value directly from the SAP table. It is also possible to identify additional validation elements in other transactions, which may be useful for comparing (technically this is validation, not verification) the actual results with the expected results. Retrieval of this data can be specified quickly and easily if required, to augment the prebuilt validation associated with each transaction code. Most important, this approach institutionalizes the validation knowledge of the functional experts in the test component, which provides the greatest value in both time savings and domain expertise.

By building validation into each test component, there is a significant time savings when constructing an end-to-end test of a business process. Instead of working at a test script level, the construction is done at a business level by choosing the transactions that need to be tested, without much effort focused on the validation of the data, since it is built into the test components. This technique is possible only through the use of an SAP test accelerator that leverages components to create the test scenarios, and specifically a next-generation test accelerator, as it permits the use of validation built into each component. The means by which these components retrieve data from the SAP database is a validation engine.


that permits direct access to the SAP database to retrieve values. It works in conjunction with a test component library that makes a data retrieval request of the validation engine. As a test is being executed, a test component starts the execution of a test script inside the GUI test tool. That execution goes to a screen transaction displayed in the SAP GUI and inputs the execution data for the transaction. This causes the step-by-step execution of one or more SAP transactions depending on the complexity of the test script. After each transaction is completed, the test script makes a request of the validation engine, asking for the values necessary to validate the results of the transaction just completed. These values are returned to the GUI test tool, compared with the expected results, and a pass or fail value is assigned to that test step. (See Exhibit B.2.) A validation engine can also increase the efficiency of testing inand outbound interfaces to SAP. Interface testing can be done through a GUI test tool but it requires the skills of a test script writer, a Visual Basic programmer, and an ABAP program in order to generate the code to access the internals of SAP through a test tool. Using a validation engine simplifies this process greatly by eliminating the need for an advanced business application programming (ABAP) programmer as well as a Visual Basic programmer, and once a test component is in place it can be reused in other end-to-end test scenarios with little technical expertise. Validation engines greatly simplify the System Under Test

[Exhibit B.2 diagram: the GUI test tool drives test scripts through an execution engine into the SAP GUI, exercising core business processes and interfaces in SAP ERP (the system under test), while the Effecta™ validation engine reads expected values directly from the back-end database.]

EXHIBIT B.2 Validation of Processes through the GUI and Back End with Validation Engine
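As a rough illustration of the execute-then-validate loop shown in Exhibit B.2, the following Python sketch drives one component through a hypothetical GUI test tool and then checks each field through a validation engine. Both gui_tool.run and validation_engine.fetch are assumed interfaces for this sketch, not real product calls:

def execute_component(component, gui_tool, validation_engine):
    """Run one transaction through the SAP GUI, then validate from the back end."""
    gui_tool.run(component.tcode, component.inputs)    # step through the screen flow
    step_results = {}
    for field_path, expected in component.validations.items():
        actual = validation_engine.fetch(field_path)   # direct read of the SAP table
        step_results[field_path] = (actual == expected)
    passed = all(step_results.values())                # pass/fail for this test step
    return passed, step_results

An end-to-end scenario would simply call this for each component in order and stop, or flag, on the first failed step.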


Validation engines greatly simplify the effort required to build end-to-end test scenarios that cross multiple platforms. With a validation engine, building lights-out test automation becomes a much simpler proposition, realistically attainable through the framework and structure a next-generation accelerator provides. With this efficiency in place, the real ROI begins to appear even to test automation naysayers, and the benefits of faster cycle times, deeper testing, and lower costs can be realized.

MAINTENANCE AND FEATURES OF ACCELERATORS

As previously mentioned, with next-generation test accelerators it is possible to think of the automated test case for a particular transaction as a test module, or test component. That single component is then used by larger automated test cases covering end-to-end processes that consist of multiple SAP transactions, so that maintenance or changes to the individual component propagate across all automated test cases that use it. This simplifies the change process and reduces the labor associated with keeping test assets current. First-generation test accelerators solved the challenge of constant change with this component architecture, but in doing so introduced a number of other challenges that were subsequently addressed by next-generation accelerators.

The challenge with building test scripts from components is the management and coordination of those components across a group of test developers. As in any development process, without a system for tracking, versioning, and distributing these test assets, overall effectiveness is greatly inhibited, and without a centralized tool or method for managing them an organization can get into trouble very quickly. One method for controlling these assets is to treat them as you would any software asset and use a source code control program to manage their distribution and control. Next-generation test accelerators implement central access to these assets through the authoring environment while still using a source code control program for overall management. This gives test developers easy access, use, and reuse, while at the same time providing all of the benefits of an asset governed by the rules of a source code control program.
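To make the propagation benefit concrete, here is a minimal, hypothetical Python sketch of a component registry. A real accelerator would keep this library under source code control rather than in memory, and every name below is illustrative:

component_library = {}   # the shared component library, versioned in source control

def register(name, tcode, inputs):
    """Publish (or update) one component; every consumer sees the new version."""
    component_library[name] = {"tcode": tcode, "inputs": inputs}

def build_scenario(*names):
    """Resolve components by name at build time, so fixes propagate automatically."""
    return [component_library[name] for name in names]

register("create_po", "ME21N", {"vendor": "300001"})
register("goods_receipt", "MIGO", {"movement_type": "101"})
register("post_invoice", "MIRO", {"amount": "100.00"})

# If the ME21N screen changes, re-registering "create_po" once repairs
# every end-to-end scenario that references it by name.
procure_to_pay = build_scenario("create_po", "goods_receipt", "post_invoice")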


The source code control program provides a repository for the test component library, as well as a copy of the database containing the metadata configuration, that is, the elements and objects that make up the automated test cases. The overall impact is a disciplined test environment with the ability to retrace past tests, which assures consistent testing and manages the ongoing evolution of your test assets in a controlled way.

With the management and control of test assets defined, let us look at how changes are made. When working with a next-generation test accelerator, most process changes are taken in stride because they no longer require intensive coding changes; working with metadata through a forms-based selection process greatly simplifies testing. However, there are times when a change to your SAP system is not based on standard SAP transactions but on a modification your organization has made to SAP, or perhaps on an interface to another application altogether. In these cases your test accelerator needs to be able to handle custom Z-transactions or inbound and outbound interfaces. If the accelerator is designed properly, it is possible to build custom components to address the specific needs of these custom objects, and because custom objects are very commonly part of core, critical business processes, you should assume you will need to deal with them. In SAP, Z-transactions exist precisely because a standard SAP transaction did not exactly meet the needs of the user, so it is unrealistic to expect a test accelerator to ship with the functionality to test that Z-transaction exactly. Similarly, an interface to another system will not be included in any standard test accelerator, so a custom test component will be necessary (a sketch of such a component follows below).

To implement effective test automation in these environments, a test accelerator provider must either provide effective training in developing test components or provide an impeccable service to build them for you, or, more likely, both. If for no other reason than freedom of choice, a vendor must supply a developer's course on the construction of test components. The course should provide enough detail about the internals of a component that a custom component can be included in the library alongside the standard components and operate seamlessly within that environment.
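As a rough sketch of what such a custom component might look like (the transaction code, the Z-table, and the engine interfaces are all assumed for illustration), consider the following; it follows the same execute/validate contract as the earlier sketches, so it could drop into any end-to-end scenario:

class ZCreditCheckComponent:
    """Custom component for an imaginary Z-transaction, ZCREDIT01."""
    tcode = "ZCREDIT01"

    def execute(self, gui_tool, inputs):
        # Same contract as a standard component: drive the screen flow.
        gui_tool.run(self.tcode, inputs)

    def validate(self, validation_engine):
        # Custom transactions often write to custom (Z) tables, so this
        # validation must be authored by someone who knows the object.
        status = validation_engine.fetch("ZCREDIT.STATUS")
        return status == "APPROVED"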


Additionally, the accelerator vendor should be capable of providing a service to build these components for you, should you choose to outsource their creation. Eventually, we will see third-party service providers offering test component development.

Next-generation test accelerators that store execution data, validation data, and metadata in a relational database are a dramatic improvement over managing test data in spreadsheets. By centralizing the data, the test platform simplifies test creation; manages access control; simplifies backup, recovery, and revision control; and eases audit and compliance with government regulatory requirements under Sarbanes-Oxley, the Food and Drug Administration (FDA), and others (a sketch of such a centralized store appears at the end of this section).

Early SAP test accelerators have been a promise for a number of years, but their value was hampered by the following five major challenges:

1. Test script creation slowed adoption, as many were not ready to follow the paradigm of the test tool provider over SAP's business process model.
2. Most test organizations were in the dark about lights-out testing, so they were unable to see the ROI on test automation, even with accelerators.
3. Maintainability of test scripts was a significant hindrance for anyone writing test scripts manually, and it required an army of staff to keep test assets current.
4. Data management was unsecured, and this only exacerbated the maintainability of test assets.
5. Finally, the cost model for building test assets was based on a custom programming model instead of a build-once, distribute-everywhere model that spreads the cost across the entire SAP community.

Times have changed, and forward-thinking test automation companies have solved these five key problems; if you have been skeptical of SAP test accelerators in the past, it is a good time to take a closer look at the state of the industry.
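To close, here is a minimal sketch of the centralized-store idea, using SQLite purely for illustration; real accelerators use their own schemas, so this table layout is an assumption:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_data (
    scenario TEXT, tcode TEXT, field TEXT, value TEXT,
    kind TEXT CHECK (kind IN ('execution', 'validation')))""")
conn.execute("INSERT INTO test_data VALUES ('order_to_cash', 'VA01', 'sold_to', '1000', 'execution')")
conn.execute("INSERT INTO test_data VALUES ('order_to_cash', 'VA01', 'VBAK.AUART', 'OR', 'validation')")

# One query replaces hunting through per-tester spreadsheets, and the
# database layer brings access control, backup, and an audit trail with it.
validations = conn.execute(
    "SELECT field, value FROM test_data WHERE scenario = ? AND kind = 'validation'",
    ("order_to_cash",)).fetchall()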


Index

A
ABAP. See Advanced business application programming (ABAP)
Action-based testing, 342. See also Keyword-driven automation approach
Ad hoc system changes, 300, 309
Ad hoc tests, 72, 166
Advanced business application programming (ABAP), 2, 15, 16, 20, 21, 23, 52, 63, 167, 192, 200, 204, 227, 231, 245, 255, 263, 301
Alexander, Christopher, 12
Altova UModel, 47
Application under test (AUT), 11
Approval (sign-offs)
  automated test cases, 91
  production changes, 309, 310
  requirements, 44
  test case, 233, 239
  test plans, 176
  test results, 3
  test strategies, 176
ARIS, 47
Arsin Corporation, test tool evaluation form, 101–106
ASAP Roadmap Methodology, 2, 7, 28
  accelerator for BPP template, 236
  and early testing, 9
  feasibility check, 45
  integration test, 59
  requirements, developing, 45
  test case templates, 183–185, 232, 233
  test strategy, 174
  testing activities, 269
  workshops, conducting, 42
Ascendant, 2, 28
  and business scenarios, 41
  experience with and cost estimates, 67
  prioritizing requirements, 52, 53
  test case templates, 232, 233
  test plan sample, 176
  test strategy template, 174, 175
  use of in developing requirements, 39
Audits, 3, 4, 170, 171, 288, 305, 317
Automated testing
  capacity testing. See Capacity testing
  criteria for business process test automation, 163, 164
  failure signs, 167, 169, 170
  functional testing. See Functional testing
  and number of resources needed, 200
  processes, 32
  production-based SAP system. See Production-based SAP system
  regression testing. See Regression testing
  sources of automation, 160–161
  test results, 287, 288
  tools. See Test tools
  types of tests suitable for, 161–166
AutoSys, 15
Autotester, Inc., test tool evaluation form, 107–114
AutoTester ONE Special Edition for SAP, evaluation form, 107–114

B
Basis team
  and capacity testing, 258
  and performance testing, 17
  as source of individual resources, 226
  and system changes, 310, 312
  test readiness review, 178
Basis team leader
  approval of test plan and test strategy, 176
  as member of change control board, 55
Batch Data Communication (BDC), 227
Beta testing, 211
Black box testing, 1, 7, 15
BMC Patrol, 253
Bolt-ons, 16, 17, 160, 200, 251, 311
Boundary testing, 4
Budget, 4, 68, 171. See also Costs
Build-operate-transfer (BOT), 327, 332
Build versus buy analysis, 81, 82
Business analysts (BAs), 5, 40–42, 61, 63, 64, 71, 86, 169, 231, 233, 238
Business process master list (BPML), 49, 160, 161
Business process procedures (BPPs), 21, 161
  authoring tools, 221
  and production changes, 310, 313
  and quality assurance, 173, 197
  as source of information for test case, 231, 233, 236, 237
  template, 183, 236
Business processes
  changes to and maintenance of automated test components, 84
  criteria for test automation, 163, 164
  diagrams, software for designing, 47
  gathering and analyzing, 87
  and structure of functional teams, 24
  and test automation, 91
Business rules, 87
Business scenarios, 41
Business Warehouse (BW), 16, 231, 251

C
Calendar, test execution, 274–276
Caliber-RM, 50
Capability Maturity Model (CMM), 3, 26, 173, 272, 286, 325
Capacity testing
  analysis, 264–266
  automated, 253–264
  execution, 259–264
  importance of, 243–244
  manual, 253–255, 260
  monitoring, 260, 261
  need for, 243, 244
  planning, 244–253
  and production-based systems, 266
  Roadmap templates, 244
  test design and construction, 253–259
  trial runs, 258, 259
  triggers for, 244, 245
  types of, 243, 245, 246
Cascading effects of system changes, 299, 302, 304, 316
CATT, 262, 277
CCMS, 253
Certification processes, 58
Certify, test tool evaluation form, 131–139
Change control board (CCB), 29
  and capacity testing, 265
  defect management, 285
  defect resolution, 290, 292
  and help desk system requests, 49. See also Help desk
  members of, 55
  production changes, 316
  requirements management, 38, 50
  role and responsibilities of, 55, 56
  and system changes, 18
  waivers, evaluation of impact, 60
Change management team, 17, 29–33, 310
Checklist, test readiness review, 31, 32, 179–182
Class library framework, 79–82, 87, 88
Code-free automation approach, 76, 82–84, 87, 88
Coding practices, 88
Commitment from management, 72, 73
Computer Aided Testing Tool (CATT), 262, 277
Compuware Corporation, 304
  test tool evaluation form, 115–121
Configuration changes, 49, 75, 83, 84, 162
  and maintenance of test components, 84
Configuration team
  and capacity testing, 258
  leader approval of test plan and test strategy, 176
  and performance testing, 17
  and scenario testing, 16
  as source of individual resources, 225, 226
  structure of, 24
  and system changes, 310, 313, 314
  test readiness review, 178
  unit testing, 14
  and user acceptance testing, 17
Conflicts of interest and independent testing, 57
Consistency, requirements, 51
Consultants, 5, 6, 61, 64, 231, 241
Continuous process improvement
  lessons learned, documentation of. See Lessons learned
  and outsourcing, 327
  and tester evaluation, 205
  testing, 22
Control-M, 15
Corrective actions, 26
Costs
  automation, 73
  estimating, 4, 61–68
  licensing fees, 82
  and outsourcing, 24
Customer input (CI) templates, 39–49, 53, 236
Customer Relationship Management (CRM), 251
Customization, 31, 91, 160, 163, 203

D
Data
  changes to, 84, 85
  defects, 293
  dictionary, 232, 233, 236
  and functional test automation, 72, 74–76, 82–85, 87
  historical data, 62, 63, 68, 276
  loading, 202, 223
  master data, 40, 232
  migration testing, 18, 237
  test data, collection of, 290
  and test dependencies, 277, 288
  values, 183, 304
Data-driven approach to automation, 72, 76, 77, 342. See also Keyword-driven automation approach
Database administrator (DBA), 247, 260
Database team, 17, 229
Databases
  defects, 285, 288, 298
  and frameworks, 78, 79
  Microsoft Access, 222
  test data, 85
Defects, 3
  aging, 280, 281
  density, 282, 283
  fix retest, 281
  and implementation partners, 59
  management, 285, 288, 298
  newly opened, 281, 282
  prevention, 11
  reporting, 285, 290–298
  and role of quality assurance, 174. See also Quality assurance (QA) standards
  severity levels, 292, 294, 295
  and test engineer self-evaluation, 214–217
  and test management tools, 170, 171
  trend analysis, 281, 282
Deloitte Consulting, 28
Department of Defense (DoD), 28, 29, 287
Destructive testing, 72
Development objects, 49. See also Report, interface, conversion, enhancement, work flow and form (RICEWF) objects
Development team
  and capacity testing, 258
  and development testing, 15, 16, 227, 228
  and scenario testing, 16
  structure of, 24
  and system changes, 310, 312
  test readiness review, 178
Development team leader
  approval of test plan and test strategy, 176
  as member of change control board, 55
Development testing, 15, 16, 227, 228
Diagramming. See also Unified Modeling Language (UML)
  flow processes. See Flow process diagrams
  processes and requirements, 3
Documentation, 3
  approvals, 311
  automation, 71, 72, 91
  capacity testing, 255, 256
  inadequate, 169
  lessons learned, 21, 23, 26–28, 31, 173, 198, 265
  need for, 285, 286
  and outsourcing, 88
  requirements, 9, 11, 44
  retention of, 4
  and system changes, 313
  test results, 285–287
DOORS, 50
Dustin, Elfriede, 50

E
Early testing, importance of, 9–13
Early Watch, 253
eCATT, 92, 221
Eighty/twenty rule (Pareto's principle), 247, 251
End users. See also User acceptance test (UAT)
  and developing requirements, 38, 45–47
  and functional requirements, 37
  hands-on testing, 3
  help desk tickets. See Help desk
  and integration testing, 17
  and performance testing, 162, 163
  questionnaires, 48
  as source of individual resources, 225, 226
  surveys, 48
  and system changes, 310, 312
  and test cases, 238, 239
  training. See Training
  as workshop participants, 42
Entrance criteria, 3, 58, 176, 177
Estimates
  automation timeline, 86–88
  costs, 62, 63, 65–67
  and test execution calendar, 274
  test schedule, 268, 269
Evolutionary model, 28
Exit criteria, 3, 58, 59, 176, 177
Expected test results, 183
Expert judgment model, 67, 68, 274
Extended Computer Aided Test Tool (eCATT), 92, 221

F
Feasibility check, 45
Flow process diagrams, 4, 32, 222, 231, 233, 237, 313
Frameworks approach, 76–82, 86–88
Functional requirements
  documenting, 13
  and managing requirements, 49
  and prioritization, 52
  and requirements traceability matrix, 54. See also Requirements traceability matrix (RTM)
  as source of information for test case, 231
  and test cases, 161
Functional specifications, 236, 237, 313
Functional team, 24, 55, 310, 312
Functional testing, 3
  approaches, 76–83
  business case for, 69–71
  documentation, 71, 72
  management, 85–87
  negative testing, 72
  outsourcing scripting, 87–89
  pitfalls, 74–76
  positive testing, 72
  for regression testing, 71
  success factors, 72–74
  test library maintenance, 83–85
  testers, evaluating, 209, 210
  "to-be" processes, 72
  when to automate, 71, 72

G
Gap analysis, 38, 39, 42, 43, 45, 47–48, 193, 240, 301
Good manufacturing practices (GMPs), 65
Graphical user interface (GUI), 4, 49, 79–83, 311

H
Hands-on testing, 3
Hardware resources, 224, 225
Help desk
  and automated test tools, 224
  and cost estimates, 63
  and emergency changes, 300
  and production changes, 47, 310
  reduction of complaints as objective of SAP, 243
  and SAP implementation, 5, 6
  and scope creep, 49, 55
  as source of requirements, 37–39, 42, 47–49
  and system changes, 227, 301, 302, 310
  system defects, 20
  and traceability of requirements, 53
Historical information model, 67, 68, 274

I
IBM, 28
  Ascendant. See Ascendant
  test tool evaluation form, 150–159
IDS Scheer, 47
IEEE, 3, 7, 28, 56, 57, 173
Implementation methodologies, 2, 5–8, 28, 29, 62, 63, 65–67, 162
Independent verification and validation (IV&V), 56, 57. See also Verification
Industry regulations
  and CI templates, 43, 44
  requirements, 37
  as source of requirements, 38
Infrastructure team, 17
Institute of Electrical and Electronics Engineers (IEEE), 3, 7, 28, 56, 57, 173
Integrated Relationship team, 325–327
Integration manager, 55, 176
Integration team, 18, 310, 312
Integration testing, 16, 17, 160–162, 178–185, 201, 228, 301
Intellicorp, 47
Intermediate documents (IDOCs), 251, 255, 262
Interviews, 45–47
iTKO Inc., test tool evaluation form, 140–149

K
Key or action word framework, 79, 86, 88
Keyword-driven automation approach, 340–346

L
Legacy systems, 228
  and capacity testing, 247
  and challenges in SAP testing, 5
  and data migration, 23
  and data verification, 275
  and development testing, 15, 16, 228
  documentation, 39
  and quality assurance, 175
  as source of requirements, 38, 42, 45
  and test cases, 21, 232, 237, 238
  and test team members, 199, 204
Lessons learned
  capacity testing, 265
  and changes to testing, 31
  and cost estimates, 61, 63, 66
  need for capturing, 19
  from outsourcing, 331, 332
  and peer reviews, 239
  repository for, 28
  reviewing and documenting, 21, 23, 26–28, 31, 173, 198
LiveModel, 47
Load testing, 162, 243, 245, 247, 255–259, 261, 263, 264. See also Capacity testing
Loadrunner, 253
Logs, 283, 287. See also Test results
Luminate, 253

M
Maintenance
  automated test components, 84, 85
  test cases, 240, 241
Manual keystrokes, capturing, 91
Manual testing, 3, 202
  ad hoc tests, 72
  capacity testing, 253–255
  destructive testing, 72
  production-based SAP system, 299, 304–306
  random testing, 72
  and signs of automation failure, 169, 170
  and system changes, 298
  test results, 287, 288
Mercury Deployment Management Extension for SAP Solutions, 309
Mercury Interactive, 50, 253, 309
Metrics
  test case planning, 239, 240
  test execution, 278–283

N
Naming conventions, 75, 76, 85, 88
Narratives, 4, 32, 47, 222. See also Unified Modeling Language (UML)
Negative testing, 4, 14, 15, 72
Nonfunctional requirements, 13
Nonfunctional testing, 210, 211

O
Offshoring, 319, 325
Origins of SAP, 1, 2
Orthogonal arrays testing systems (OATS), 4, 306, 333–340
OSS (On-line Service System), 18, 49, 63, 66, 162, 190, 203, 227, 267, 299, 300, 302, 311
Outsourcing
  benefits of, 319–321
  build-operate-transfer (BOT) model, 327, 332
  costs, 24
  defined, 319
  deliverables-based project, 323
  documentation, 88
  factors to consider, 321, 322
  and Integrated Relationship team, 325–327
  lessons learned, 331, 332
  managed service, 323, 324
  managed staffing, 324
  offshore, 319, 325–327
  offsite, 325–327
  onsite, 325
  payment terms, 329
  scripting, 87–89
  as source of individual resources, 225
  staff augmentation, 324, 328
  terms of testing service, 327–331
  test automation, 91
  test teams, 22, 24
  and testing system changes, 313, 314

P
Pareto's principle (80/20 rule), 247, 251
Pass/fail criteria, 178
Patches, 49, 60, 66, 73, 83, 84, 162, 190, 203, 223, 267, 289, 300, 302
Peer reviews, 3, 31, 44, 233, 238, 239
Performance testing, 17, 160–162, 201, 228, 329, 301, 302
Pilot project, 73, 74
Positive testing, 4, 72
Prioritization, requirements, 36, 51–53
Production-based SAP system
  approvals for changes, 309, 310
  automated testing, 299, 305–309, 314–317
  and capacity testing, 266
  and cost estimates, 65–67
  rainy-day scenarios, 306
  requirements, sources of, 42, 47, 48
  sunny-day scenarios, 305–309, 311
  support for testing, 313, 314
  system changes, 300–303
  testing challenges, 302, 304, 305
  types of tests, 310–312
Production support, 3
Production team, 225, 226
Project Management Institute (PMI), 26
Project management operations (PMO), 29, 56, 57, 178
Project manager, 55, 176, 178, 258
Prototypes and demonstrations, 39, 45, 47, 53, 55, 58, 63, 166

Q
Quality assurance (QA) standards, 3, 4
  applicability of, 186, 187
  and cost estimates, 61
  limitations of, 186, 187
  quality defined, 35
  and quality management (QM) module, 173
  quality measures, 12
  test case template, 183–186
  test cases, 240
  test criteria, 176–178
  test plan and strategy, 174–176
  test readiness review, 178–183
Quality assurance (QA) team
  composition of team, 201, 202
  and cost estimates, 61, 65
  and diversion from primary job responsibilities, 187, 188
  evaluating testers, 205–214
  integrated with test team, 191
  number of resources needed, 200–201
  project preparation phase, 190
  responsibilities and skills sets required, 197, 198
  role and responsibilities, 173, 174
  skills, 188, 191–199
  and test cases, 233
  test team differences, 188–190
  when to add to project, 190, 191
Quality Center (TestDirector), 50
Quality management (QM) module, 173
Questionnaires
  capacity test planning, 247–250
  end users, 45–48

R
Rainy-day scenarios, 306
Random testing, 72
Rational Functional Tester, test tool evaluation form, 150–159
Rational Requisite Pro, 50
Rational Rose, 47
Rational Unifying Process (RUP), 4
Record and play, 74–77, 85
Regression testing, 3, 4, 18, 32, 71, 160–162, 229, 233, 299, 301, 302, 304, 310, 311, 314
Regulatory compliance, 57, 285, 287, 309, 314
Relational databases, 79
Releases
  previous release as source of requirements, 37, 38, 47, 48
  and system changes, 300, 301
  testing criteria, 177, 213
Remote function calls (RFCs), 41
Repetitive nontesting tasks, 167
Report, interface, conversion, and enhancement (RICE) objects, 301, 313
Report, interface, conversion, enhancement, work flow and form (RICEWF) objects, 15, 16, 37, 49, 237, 271
Reports, 200, 201, 285. See also Test reporting
Repositories
  and cost estimates, 67
  database, 86
  documentation of business processes, 47
  lessons learned, 28
  requirements, 47, 50, 54, 55, 222
  and test management tools, 288, 298
  test plan and strategy, 176
  tests, 170, 171, 176, 241, 283, 308
  and tracking approvals and changes, 309
  vendors, 50
Request for proposal (RFP), 331
Requirements, 4
  ambiguous, 51, 53, 251
  approval process, 44
  and defect prevention, 11. See also Defects
  defined, 36
  development objects, 37, 49
  documentation, 9, 11, 44
  drafting, 38, 39
  early testing, 9, 11
  evaluating, 50–53
  examples of well-written and poorly-written, 251–253
  failure to meet, 7
  feasibility check, 45
  functional. See Functional requirements
  inspection, 44
  linking, 50
  management tools, 38, 49, 50, 170, 171, 221, 222
  methods for gathering, 39–49
  peer review, 44
  performance, 49
  prioritizing, 36, 51–53
  and quality, 12, 13, 35, 36
  repositories, 47, 50, 54, 55, 222
  security, 37, 49
  as source of information for test case, 236, 237
  sources of, 37, 38
  and system changes, 313
  system performance, 37
  terminology, 37, 38
  and test case, 232
  testing, 11, 12
  traceability matrix. See Requirements traceability matrix (RTM)
  types of, 37
  UML, use of. See Unified Modeling Language (UML)
  usability, 49
  user interviews, 45–47
  verification, 12, 13, 56–60
  work flow, 49
  workshops, use of, 38, 42–45
Requirements-based testing, 31, 35
Requirements traceability matrix (RTM), 3, 4, 35
  construction of, 58
  developing, 53, 54
  inadequate, 6, 7
  quality assurance team, 190
  and requirement management tools, 221, 222
  and requirements-based testing, 31
  and test cases, 237
  test team, 190
  and verification of requirements, 58
Requisite Pro, 50
Resources, 219, 220
  environment, 225
  hardware, 224, 225
  individual, 225–229
  quality assurance (QA) team, 200, 201
  software, 222–224
  test lab, 220, 221
  test team, 189, 190, 200, 201, 204, 205
Resumption criteria, 178
Return on investment (ROI)
  and test case automation, 164–166
  test tools, 7, 8, 166, 167, 169
Reverse engineering, 11
RICEWF. See Report, interface, conversion, enhancement, work flow and form (RICEWF) objects
Roadmap Methodology. See ASAP Roadmap Methodology
RTM. See Requirements traceability matrix (RTM)

S
SAP Assessor Tool, 304
SAP modules, 24, 173, 300, 302, 306, 308, 311
SAP objects, transporting, 222, 309
Sarbanes-Oxley (SOX), 4, 52, 57, 65, 286, 309
Scenario testing, 16, 161, 162, 164, 165, 177, 201, 227
Schedule, 4, 62, 65. See also Test schedule
Scope and purpose of book, 2–4
Scope creep, 49, 55
Scope of testing, 4, 6, 7, 65–67
Scope statement, 45, 46
Screen/window framework, 79–81, 87, 88
Screenshots, 186, 225, 262, 283, 288, 292, 305, 310, 314, 316, 317
Scripts
  capacity testing, 255, 256, 258, 262
  CATT, 277
  and cost estimates, 63
  documentation, 267
  eCATT, 221
  and functional testing, 71, 75–79, 84–89
  outsourcing, 87–89, 330
  script coding, 86, 87
  test script, 231, 240, 284, 299, 302, 306, 308, 309, 311, 313–317
  and test tools, 75, 91, 92, 163, 164, 199, 221, 272
  and testers, 208, 213
Security testing, 14, 175, 301
SEI. See Software Engineering Institute (SEI)
Serena-RTM, 50, 58
Service-level agreements (SLAs), 4
  and capacity testing, 259, 263, 265, 266
  and outsourcing, 323, 324
  and performance testing, 17
  and requirements, 253
  and system changes, 311
Site surveys as source of requirements, 38
SiteScope, 253
Six Sigma, 173, 176
Smart Draw, 47
Smoke testing, 7, 161, 301
Software development
  and early planning, 268
  life cycle, 9, 267, 328
  outsourcing, 319
  and system quality, 35
  testers, 12
Software Engineering Institute (SEI), 4, 7, 28, 286
  Capability Maturity Model. See Capability Maturity Model (CMM)
Software resources, 221–224
Solution Manager, 28, 42, 47. See also ASAP Roadmap Methodology
  automating sunny-day scenarios, 306–308
  CI templates, 39–49, 53, 236
  stress and volume tests, templates for planning, 175, 247
  use of in developing requirements, 39
  white paper for documenting test strategy, 174, 175
Spreadsheets
  capacity test design, 253
  and frameworks, 78–80
  test case templates, 183, 186, 231, 233
  test data, 85
  test results, storing, 283, 288
Standards
  coding, 85
  independent testing, 56–60
  naming conventions, 75, 76, 85, 88
  quality assurance. See Quality assurance (QA) standards
  test case conventions, 74–76
Statistical process control (SPC), 247
Stress testing, 162, 175, 220, 247
String tests, 16, 160, 201, 219, 301
Structured Query Language (SQL), 79, 192, 263
Subject matter experts (SMEs), 5
  and automated testing approaches, 87
  and capacity testing, 247, 258
  and estimates, 61, 64
  and exit criteria, 177
  and functional test automation, 71, 86
  and integration testing, 17, 228
  as members of change control board, 55
  as members of test team, 198, 199, 206
  and peer reviews of test cases, 238
  requirements, gathering information for, 39
  and scenario testing, 16, 227
  and signs of test automation failure, 167, 169
  as source of individual resources, 225, 226, 320
  and technical experts, 199, 208, 209
  test case review, 238
  user acceptance testing, 58
  as workshop participants, 42
Success testing criteria, 178
Sucid Corporation, test tool evaluation form, 122–130
Sunny-day scenarios, 306, 308, 309, 311
Supplier Relationship Management (SRM), 251, 311. See also Bolt-ons
Suspension criteria, 3, 177
System architect, 61, 64
System changes. See also Production-based SAP system
  documentation, 313
  emergency (ad hoc), 300, 301, 310
  enhancements, 49, 66
  impact of, assessing, 316, 317
  and maintaining test cases, 240, 241
  outputs, 313, 314
  patches. See Patches
  planned, 300, 301
  and regression testing, 18
  testing activities, 64
  upgrades. See Upgrades
System modules, addition of and need for new requirements, 38

T
Table-driven testing, 342. See also Keyword-driven automation approach
Taguchi, Genichi, 306, 334
Technical expertise, 198, 199, 208, 209
Technical specifications, 231, 236, 237
Technical testing, 18, 229
Templates
  Ascendant, 174, 175, 232, 233
  business process procedures, 183, 236
  capacity testing, 244
  customer input (CI), 39–49, 236
  and documenting lessons learned, 27
  evaluation matrix template, 93–100
  and outsourcing, 24
  and quality assurance, 173, 186
  Solution Manager, 175
  stress test planning, 175, 249
  test case, 174, 175, 183–186, 231–237, 254
  test strategy, 174
  testing SAP, 23, 29, 30
Test accelerators, 367–376
Test analysts, 86
Test approach
  changes, managing, 29–33
  implementation methodologies. See Implementation methodologies
  project components, 1
  review of existing practices, 19–22
  software methodologies, 28, 29
Test cases, 3
  automated, 4, 160, 161, 167, 232
  building, 232, 233
  characteristics of well-written, 232, 233
  customized template, 234, 235
  data dictionary example, 236
  design of, 231
  execution of. See Test execution
  maintaining, 240–241
  methods for automating, 168
  metrics, 239, 240
  and number of resources needed, 200, 201
  and orthogonal arrays (OATS), 4, 306, 333–340
  peer review, 238, 239
  production-based changes, 314, 315
  reuse of, 232
  sources of information for, 231, 233, 236–238
  templates, 174, 175, 183–186, 231–237, 254
  test scenarios, 308
  and test tools, 161–166
  and use of implementation partners, 58, 59
Test criteria, 176–178
Test Data Migration Server, 223
Test design
  and automated testing, 78, 86, 89
  and number of resources needed, 200, 201
  and test management tools, 170, 171
  tools for, 91
Test engineers
  and automated testing, 78, 82, 86, 163
  and benefits of code-free automation, 83
  evaluating, 205, 207, 211, 212
  responsibilities and skills required, 194–196
  and script coding, 87–89
  self-evaluation, 214–217
  and signs of test automation failure, 167, 169
  skill level, impact of on test execution schedule, 272
Test environment, 73, 74, 196, 225
Test execution, 233, 267, 268
  automated, 267
  calendar, 274–276
  capacity testing, 259–264
  logs and results, 283
  manual, 267
  metrics, 278–283
  and number of resources needed, 200, 201
  purpose of, 267
  test dependencies, 277, 278
  and test management tools, 170, 171
  test schedule, 267–274, 278–283
  tools for, 91
Test harness, 199, 208, 350–354
Test labs, 220, 221
Test lead, 193, 194
Test libraries, 71, 73, 76, 83–87, 89, 204, 299, 306, 309, 316
Test management tools, 3, 91, 92, 170, 171, 308
  test case templates, 231, 233
  test data collection, 290
  test results, storing, 283
  and testing metrics, 278
Test manager
  lessons learned, documenting, 27
  and managing changes, 29–33
  and peer review of test cases, 239
  responsibilities and skills required, 192, 193
  and test strategy, 174
  tester evaluation, 205–214
Test plan, 1, 4, 29–33, 91, 170, 171, 174–176, 200, 201
Test program tasks, 269–271
Test readiness review (TRR), 20, 31, 32, 178–183, 233, 277
Test reporting, 170, 171, 200, 201, 285
Test repository, 170, 171, 176, 241, 283, 308
Test results, 3, 4
  documentation, 285–287
  screenshots, 289, 290, 292
  storing, 283, 287, 290
Test schedule, 267, 274, 278–283
Test scripts. See Scripts
Test strategy, 4, 9, 29–33, 190
Test team
  borrowed resources, 204, 205, 225, 226
  centralized, 22, 23
  composition of, 202–205
  and cost estimates, 61, 64, 66
  decentralized, 22–24
  formation of, 3
  integrated with QA team, 191
  and integration testing, 17
  manager of as member of change control board, 55
  number of resources needed, 200, 201
  outsourced, 22, 24
  and performance testing, 17
  permanent team, 202–204
  project preparation phase, 190
  and quality assurance, 174, 188–190
  and regression test, 18
  resources, 187, 188
  and scenario testing, 16
  skill sets, 191–198
  structure, 22–26
  system changes, 310, 312, 314, 317
  test lab responsibility, 220, 221
  test readiness review, 178
  and user acceptance testing, 17
  when to add to project, 190, 191
Test tools
  Arsin Corporation, 101–106
  automation, 3, 4, 16, 17
  automation failure signs, 167, 169, 170
  Autotester, Inc., 107–114
  benefits of, 166, 167
  CATT, 262, 277
  commercial vendors, 92
  Compuware Corporation, 115–121
  eCATT, 92, 221
  evaluation criteria, 160
  evaluation matrix template, 93–100
  IBM, 150–159
  iTKO Inc., 140–149
  methods of automation, 167, 168
  production-based SAP system, 306
  readiness for, 91
  return on investment, 7, 8
  role of, 92
  software, 221
  Sucid Corporation, 122–130
  types of tests suitable for automation, 161–166
  use of, 91, 92
  vendor survey, 92
  Worksoft, Inc., 131–139
Testers. See also Test team
  early involvement, need for, 11, 12
  evaluating, 205–214
  expectations, 207, 208
  lack of skills and knowledge, 5, 6
Testing committee, 29, 30
Testing practices, basic principles, 2, 3
TestPartner, evaluation form, 115–121
Text editors
  capacity test design, 253
  and frameworks, 79
  test case templates, 183, 186, 231, 233
  test data, 85
  test results, storing, 288
"The system shall" statements, 11
Third-party organizations
  documenting lessons learned, 27
  independent verification of requirements, 56–58
  test case review, 238
  third-party verification, 3
ThreadManager, 28
"To-be" processes, 72, 87
Total quality management (TQM), 35, 176
Touch points, 41, 49, 51, 72, 193, 304, 306, 309, 310
Traceability, requirements, 51, 53. See also Requirements traceability matrix (RTM)
Training, 8, 72–74, 91, 310, 313
Transaction codes, 160, 161, 278, 304, 308
Transporting objects, 20, 26, 32, 222, 309
TRR. See Test readiness review (TRR)

U
UML. See Unified Modeling Language (UML)
Unified Modeling Language (UML), 3, 11, 32, 47, 222, 237
Unit testing, 3, 14, 15, 227, 301
Upgrades, 20, 27, 32, 49, 56, 201, 223, 300, 302, 311
Usability testing, 18, 37, 346–350
Use case, 11, 13, 47, 48
User acceptance test (UAT), 17, 53, 228, 229. See also End users
  Department of Defense requirements, 29
  resources, 219
  and role of change control board, 55
  and test cases, 233, 238, 239
  test lab, use of, 220
  and test strategies, 175
  verifications, 58, 59

V
V-shaped model, 9
Validation of system design, 3
Verification
  independent verification and validation (IV&V), 56, 57
  of objects, 167
  and outsourcing, 320, 326
  points, 161
  requirements, 12, 13, 56–60
  service-level agreements, 263
  of system design, 3
Versions
  control, 85, 88, 173, 176, 197, 222, 223, 241
  and test management tools, 170, 171, 241
Visio, 222
Volume testing, 162, 247

W
Waivers, 58–60
Waterfall model, 9, 28
White box testing, 15
Wiegers, Karl, 38, 316
Work Breakdown Structure (WBS), 269–271
Workshops, 38, 42–45
Worksoft, Inc., test tool evaluation form, 131–139
Workstations, 224, 225
