Software Security for Federal Agencies Serving Health Care HP Enterprise Security Business Whitepaper

Table of contents

Introduction/Abstract/Overview
The Long and Necessary Road to Health Care Information Technology Transformation
The Future of Health IT and the Role of Data Security
The Imperative of Software Security — What is at Stake?
XP: The Canary in the Coal Mine
Software Security for Federal Agencies Serving Health Care: A Perspective
So How Can We Improve?
What to do about Applications Built by Third Parties? (e.g., System Integrators)
Conclusion
Addendum A
About Fortify Software
About HP Enterprise Security

Introduction/Abstract/Overview

This paper aims to explain why software security is fundamental and imperative to health IT transformation. With the recent passage of health care reform and the Health Information Technology for Economic and Clinical Health (HITECH) Act, the nation has reason to be optimistic about significant positive change. The HITECH Act of 2009, in particular, sets the stage for vastly improved sharing of electronic health information. If this vision is successfully realized, we can expect better patient care, lower administrative costs, and less fraud and waste.

In our view, the biggest impediment to success is poor data security within the information systems that carry health information. More specifically, the software that runs these systems is at considerable risk of attack and misuse. Without adequate security for protected health information, patients will not trust electronic health systems to guard the privacy of their very sensitive information. Further, we must view our health information systems as “critical infrastructure” that serves the public. As with the electric grid, destruction of health systems by foreign agents, criminals, or terrorists could cause substantial damage and even the death of many citizens. Given the relatively immature state of current health IT infrastructure, this may not be an immediate worry. But with the HITECH Act and other initiatives spurring better use of health information technology, we can expect software to be at the center of any progress we make. Government health care providers have a special responsibility to demonstrate that health IT transformation is worth the investment. In short, we argue that without adequate software security we are inviting serious danger and the possible collapse of our hopes.

We begin this narrative with the need for health IT transformation and recent government efforts to aid in this vital effort. Next we delve into an explanation of the role of data security in health care. We then assert that software vulnerabilities are the crux of our weaknesses. We also offer a perspective on mandated software acquisition and development requirements for federal agencies serving health care. Finally, we suggest that the notion of Software Security Assurance (SSA) is fundamental to improving the state of our software and to fostering genuine health IT transformation. We conclude with a call to action for leaders responsible for health IT decisions.

The Long and Necessary Road to Health Care Information Technology Transformation

Contrary to public opinion, the business of health care is flourishing in the United States. More than 15% of our current GDP can be attributed to health care, and the Council of Economic Advisers (CEA) estimates this will reach 20% of GDP by 2020. Millions of jobs and services are created because of health care; it is clearly a vital, thriving sector of our economy.


In its exhaustive analysis over several years, the Department of Health & Human Services’ Office of the National Coordinator (ONC) noted that, compared to other advanced countries, we did not have capable electronic systems to manage this care efficiently. Our care ordering, monitoring, and delivery processes were largely paper driven, and this led to waste, fraud, abuse, and inefficiency. Unlike other US industries such as manufacturing and financial services, which saw leaps in productivity from using advanced information technology in the 1990s and 2000s, health care IT has lagged considerably. The ONC concluded that many of these weaknesses in our health care delivery stem from the poor use of health information technology. Under the Obama administration, the pace of IT transformation vastly accelerated. In particular, the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, is the key new instrument of positive change. Funds available through HITECH have been targeted to finally thrust our health care delivery systems into the 21st century in an organized and thoughtful way. By allocating over $20 billion in subsidies, grants, and other incentives to spur adoption of Electronic Health Records, the country is now set for a massive overhaul of health information technology. While one can debate the difficulties of pursuing this path, including security and privacy concerns, health IT transformation has received a massive and much needed boost.

The Future of Health IT and the Role of Data Security

So what may we expect advanced information technology to accomplish for health care? Predicting an exact sequence of what will happen is folly. However, based on the successes of other industries, we might assume that the following changes will occur:

• Most health providers will develop a centralized repository for patient health information to aid in clinical workflows, follow-up care, reporting, and billing. Health Information Exchanges (HIEs) will be built to facilitate the cross-flow of patient information across the country and among different care providers. Care will improve and errors will be reduced.

• An increasing number of medical devices will have direct connections to central repositories, giving promise to automatic or semi-automatic delivery of care protocols. For example, we may imagine a bedside monitor noting a diabetic patient’s vital signs and then automatically dispensing the correct amount of insulin.

• Telemedicine will take root. With centralized, shareable medical records, patients may seek care from anywhere in the world. Interpretation of test results, X-rays, MRI scans, and other vital patient information will be possible from remote locations and will become commonplace.

• “Implants” such as cochlear devices or heart regulators will be monitored remotely or by the patient.

The point to take away from these scenarios is that we can expect information technology to become even more pervasive in clinical settings. There will always be clinical risk (e.g., risk from inadequate care protocols), but when clinical risk is tied ever more tightly to information risk, our risks compound. As an example, assume that a doctor relies on care protocols delivered to him by a hospital decision-assist “system,” which is itself fed the latest suggestions on care protocols by a publisher of medical journals. Further assume that an attacker finds a way to change some protocols by gaining illegitimate access to the hospital’s or publisher’s systems. The attacker changes the care protocol for a certain diagnosis from 10 mg of a certain medication to 1,000 mg, which would lead to death by poisoning. Consider another example: a technician for an MRI equipment maker performs remote diagnostics on an MRI machine hooked into a hospital network. The technician is unhappy at work and decides to go rogue. He plants a “Trojan horse” in the hospital network through a software flaw and obtains key user names and passwords through a back channel. The technician then uses these passwords to penetrate the hospital’s systems from an untraceable or public location. The damage the hospital sustains is limited only by the technician’s level of malevolence.

This merging of cyber risk with clinical risk is a major hurdle we have to overcome. Inadequate data security not only dents the delivery of medical care; it can be the very cause of medical errors, and it can result in a vast and systemic failure of a large portion of our economy. In the opinion of this author, the situation is dire and must be corrected. To IT professionals, a fundamental premise on which all of these possible IT advancements rest is data security. Security professionals define “security” as systems and humans respecting the Confidentiality, Integrity, and Availability of the data that courses through the various health IT servers, applications, databases, networks, and devices:

• If data is consistently lost (i.e., no longer kept confidential from prying eyes), patients can no longer trust their caregivers. (Confidentiality)

• If data is regularly corrupted, inadvertently or on purpose, patients and care providers will not trust the contents of the data repository. If test readings can be changed, for example, death by poisoning can easily follow. (Integrity)

• If database servers and applications keep crashing and are not always accessible, trust in the information delivery system breaks down. (Availability)

Data security in health care settings is as vital as, if not more vital than, our reliance on banking systems to manage our finances. Data security is a complex beast to manage, and we must understand this. There are no magic bullets. Instead we must create layers of defenses at every link in the information chain.

The Imperative of Software Security — What is at Stake?

For the contemporary CEO or CIO, digital risk is now as important as any other financial, operational, or strategic risk. Considerable brand damage, catastrophic infrastructure loss, and financial and identity theft can all occur if digital risk is not drastically reduced. At the core of this assumed risk is an almost complete reliance on network-based security products to thwart hackers, viruses, and worms from bringing down systems or conducting illegal transactions. Because network-based defenses (firewalls, patch-management systems, and anti-virus software) are ubiquitous and generally kept up to date, many IT managers believe that their data is safe. Unfortunately, this view is fraught with peril. While network-based security is still essential, the bulk of digital risk, the soft underbelly of our information infrastructures, does not lie within the network. Instead the risk lies hidden and obscured within the thousands of software applications that manage, store, and serve up the data, and that run our business processes.

The maze-like applications so prevalent in large corporations, written and supported by teams of specialists, are highly vulnerable. This is because of their sheer complexity, their requirement to connect to private and public networks, and their reliance on additional software provided by parties outside the application owner’s control. It is clearly a daunting effort to keep enterprise applications secure. Consider the architectural diagram below of a typical and relatively “straightforward” web application. First note the large number of authentication points, as well as the myriad possible leaks or points of illegal entry into the system.



Figure 1: Sample Application - Potential Authentication Points

[Architectural diagram of a typical web application, showing authentication and certificate-verification points among the workstation user, web browser, web server, application server, and database server: user, administrator, and DBA authentication requests; user and administrator CACs verified by PKI; web server, application server, database server, and mobile code certificates verified by PKI; and custom application code to authenticate the web user.]

Industry analysts say as much as 75% of attacks occur at the application layer, not the network layer, and that the most damaging targeted attacks exploit vulnerabilities in web applications and custom-developed software. Since most security vulnerabilities stem from poor or badly written code, we have to be especially careful during the coding process. The complexity of contemporary applications, combined with the fact that many of them were developed at lightning speed with little regard for security, leaves us vulnerable. No one was using the Internet for commercial transactions in 1994. Yet by 2010, a mere 16 years later, the Internet had become indispensable to our lives. If we dig a bit deeper, we realize that software, whether it resides in network routers, user applications, databases, or embedded device controllers, is at the core of information technology.


The truly sad part is that we wrote much of this software on the basis of mutual trust and to “do no harm” to fellow users. But today’s world is full of thieves, criminals, terrorists and rogue states, many of whom seek to cause us catastrophic damage for nefarious gains. The bad guys are targeting weaknesses in software more than anything else in our information landscape. So before we transform our health IT, we must fully understand how to build our defenses and protect our data. To summarize: As of 2010 we have built an amazingly complex world using computer software. The statement may appear to be trite, but it is a fact. The modern world is built on software. For our global society to function in the 21st Century, this software must be kept secure. It is that simple.

Excerpts from a July 2002 memo by Bill Gates to Microsoft’s Customers: “Six months ago, I sent a call-to-action to Microsoft’s 50,000 employees, outlining what I believe is the highest priority for the company and for our industry over the next decade: building a Trustworthy Computing environment for customers that is as reliable as the electricity that powers our homes and businesses today…” “Earlier this year, the development work of more than 8,500 Microsoft engineers was put on hold while we conducted an intensive security analysis of millions of lines of Windows source code. Every Windows engineer and several thousand engineers in other parts of the company were also given special training in writing secure software. We estimated that the stand-down would take 30 days. It took nearly twice that long, and cost Microsoft more than $100 million. We’ve undertaken similar code reviews and security training for Microsoft Office and Visual Studio .NET, and will be doing so for other products as well…”

XP: The Canary in the Coal Mine

In the early 2000s, Microsoft’s confounding security problems with early versions of its flagship XP operating system made it the target of severe criticism. As XP became the de facto operating system by which to interact with the Internet, attacks on it rose substantially. More critically, XP was relatively easy to hack. When Microsoft dug further into the issue, it realized that during the development of its operating system there had been little consideration of how the Internet would change the security model for computing. Gone were individual machines, or small clusters of client-server local networks. Now every computer could be connected to another using the Internet. Microsoft, in its wisdom and to protect its Windows franchise, began to clean house through a long-term “Trustworthy Computing” initiative. In short, Microsoft realized that much of the problem could be traced back to poor coding practices. In Microsoft’s own words from its website in 2010, “Our focus on fundamentals is making the platform inherently safer. As part of this initiative Microsoft has trained its developers, testers, and program managers in how to develop more secure code, putting in place a process for developing secure code called the Security Development Lifecycle (SDL). Microsoft holds its engineering teams accountable for the security of the code they deliver.”

Microsoft is not the only company to have taken such measures, but it was among the first. Many leading Independent Software Vendors (ISVs) and some large financial services firms have followed suit. But this notion of software security has not hit home in health care. In a random set of interviews with leading Electronic Health Record (EHR) vendors, this author found that very few used advanced, security-oriented development processes to identify and test for vulnerabilities in the code that they themselves developed. For vendors creating and selling such software in 2010, this borders on travesty. When a consumer purchases a set of tires with a new car, he expects that those tires have been thoroughly tested with 21st-century technology.

Indeed, tires go through numerous and rigorous safety checks. When we drive our cars, we generally do not think about the soundness of the four rubber wheels propelling us or helping us brake effectively. The conspicuous absence of security testing tools and processes in health care is a void that must be filled. When we build software for electronic health information systems, we are, in essence, building the wheels of those systems. We must use the best available technology to secure our systems, or else we invite danger and become unsure of their capacity to protect our data and keep it away from prying eyes.

Software Security for Federal Agencies Serving Health Care: A Perspective

Overview
For federal agencies supporting the nation’s health care efforts, meeting compliance requirements for patient privacy and health information security is a complex challenge. The Health Insurance Portability and Accountability Act (HIPAA) and its extensions through the HITECH Act serve as the primary requirements. HIPAA, unfortunately, does not provide specific direction on how to develop or acquire software that is secure. However, there are other regulations that apply to all federal agencies (except the Department of Defense, which has its own set of requirements). Among the most notable are the standards and guidelines developed by the National Institute of Standards and Technology (NIST) for federal computer systems. These standards and guidelines are issued by NIST as Federal Information Processing Standards (FIPS) for use government-wide. NIST develops FIPS in security areas such as cryptography, authentication, security, and interoperability. The minimum security requirements, set out in FIPS 200, cover seventeen security-related areas with regard to protecting the confidentiality, integrity, and availability of federal information systems and the information processed, stored, and transmitted by those systems. The application of the security controls required by this standard is defined in NIST Special Publication 800-53.


In addition to the FIPS requirements, the Federal Information Security Management Act (FISMA), passed in 2002, provides additional information security guidelines for federal agencies to follow. Specifically, FISMA emphasizes the need for each federal agency to develop, document, and implement an enterprise-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. FISMA directed the promulgation of federal standards for: (i) the security categorization of federal information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; and (ii) minimum security requirements for information and information systems in each category.

As a first step, NIST, under the FIPS 199 standard, provides guidance on how agencies must classify their systems based on the importance of the information they store, process, and transmit. FIPS Publication 199 defines three levels of potential impact on organizations or individuals should there be a breach of security (i.e., a loss of confidentiality, integrity, or availability). The application of these definitions must take place within the context of each organization and the overall national interest. Each “information system” must therefore be categorized as having LOW, MODERATE, or HIGH impact. The guidelines NIST provides are:

• The potential impact is LOW if the loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.

• The potential impact is MODERATE if the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.

• The potential impact is HIGH if the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.
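To make the categorization step concrete, the following is a minimal sketch, in Python, of a FIPS 199-style security category and the "high water mark" rule used to derive an overall impact level from the three security objectives. The function names and the example system are illustrative only; they are not drawn from any NIST publication.

```python
# Illustrative sketch of FIPS 199 categorization using the high-water-mark
# rule. Names and the example system below are hypothetical.

IMPACT_ORDER = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def security_category(confidentiality, integrity, availability):
    """Return a FIPS 199-style security category: each security objective
    mapped to its assessed potential impact level."""
    category = {
        "confidentiality": confidentiality,
        "integrity": integrity,
        "availability": availability,
    }
    for objective, level in category.items():
        if level not in IMPACT_ORDER:
            raise ValueError(f"Unknown impact level for {objective}: {level}")
    return category

def overall_impact(category):
    """High water mark: the overall impact level is the highest impact
    assigned to any single security objective."""
    return max(category.values(), key=lambda level: IMPACT_ORDER[level])

# Example: a hypothetical system that stores protected health information.
phi_system = security_category("HIGH", "HIGH", "MODERATE")
print(overall_impact(phi_system))  # prints "HIGH", so the HIGH baseline applies
```

A system categorized this way at HIGH would then draw its starting set of controls from the high-impact baseline in NIST Special Publication 800-53, as discussed in Addendum A.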


NIST goes on to amplify that: “A severe or catastrophic adverse effect means that, for example, the loss of confidentiality, integrity, or availability might: (i) cause a severe degradation in or loss of mission capability to an extent and duration that the organization is not able to perform one or more of its primary functions; (ii) result in major damage to organizational assets; (iii) result in major financial loss; or (iv) result in severe or catastrophic harm to individuals involving loss of life or serious life threatening injuries.” A sample listing of NIST-recommended controls specific to software or system acquisition is provided as Addendum A.

Department of Defense (DoD) Considerations
For national security reasons, the Department of Defense (DoD) has specific requirements for information security, driven largely by the need to thwart cyberattacks against military assets. In particular, the National Defense Authorization Act (NDAA) for 2011 directs the Secretary of Defense to develop a strategy to address software vulnerabilities for systems in development, during milestone approvals and testing, while undergoing security certifications, and while running in an operational status. To provide detailed guidance for defense agencies, the Defense Information Systems Agency (DISA) has published Security Technical Implementation Guides (STIGs), or DISA-STIGs for short. The DISA-STIG for application security is particularly instructive and detailed. The STIG is a set of application configuration standards that promote the development, integration, and updating of secure applications required under DoD policy. We believe it should also be considered a “standard” for non-defense federal agencies serving health care.

So How Can We Improve?

Security risks within applications are the direct result of how applications are designed, constructed, tested, deployed, and maintained. Security vulnerabilities may be introduced at all phases of software development, from business requirements, to design and coding, to quality assurance testing, and even during deployment. Thus assessing and mitigating security risk requires analyzing the application’s potential or current behavior within the context of a typical software development lifecycle (SDLC), illustrated below. There are variants of this approach, but in general most software development consists of seven stages.

Figure 2: Software Development Lifecycle (SDLC)

[Diagram of the seven SDLC stages: Initiate (1), Define (2), Design (3), Develop (4), Test (5), Implement (6), Operate (7), with security activities aligned to them: Governance (Education & Guidance, Strategic Planning, Standards & Compliance, Alignment & …); Requirements & Design (Threat Modeling, Security Requirements, Defensive Design); Verification & Assessment (Architecture Review, Code Review, Security Testing, Vulnerability Management); Deployment & Operations (Infrastructure Hardening, Operational Enablement).]

Stages 1 and 2: Initiate and Define
Understanding security requirements should be a mandatory exercise of the Initiate and Define stages when developing an application. A feature-rich but overly complex application may be very difficult to secure, while a simple application handling sensitive data may need high security. In these first stages it is common for security issues to take a back seat. Business owners are interested in making sure that developers build applications that provide certain functionality, e.g., ePrescribing, diagnostic assistance, and the like. Security has rarely been a “feature,” or been considered as one. For these reasons, processes that clearly help define security requirements in the business requirements phase are imperative. These include:

• Application Risk Profiling (a simple, illustrative scoring sketch follows this list): Within the context of your organization’s overall application portfolio, how risky is the application compared to others? What are the key business risks and possible technical risks? For example, are there stringent regulations that imply strong security? Does this application conduct transactions? Is it accessible via the Internet? Does the application store protected health information (PHI)?

• Describe and confirm the high-level security requirements: What high-level data or information needs to be accessed? What is the context of the application within the current infrastructure? What application features will have an impact on security?

• Determine the possible use cases of the application: What is the potential range of users of the application? How might users interact with the application: remotely, via a VPN, over a wireless connection, through a browser? Will other web services or applications connect with the application?
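As an illustration of how such risk-profiling questions might be captured and compared across an application portfolio, here is a minimal sketch in Python. The questions, weights, and tier thresholds are hypothetical, invented for this example, and not an established scoring model.

```python
# Hypothetical application risk-profiling sheet. Questions, weights, and
# thresholds are illustrative only; a real profile should reflect the
# agency's own risk criteria.

PROFILE_QUESTIONS = {
    "stores_phi":            3,  # Does the application store protected health information?
    "internet_facing":       3,  # Is it accessible via the public Internet?
    "conducts_transactions": 2,  # Does it conduct financial or clinical transactions?
    "regulated":             2,  # Do stringent regulations (e.g., HIPAA) apply?
    "third_party_code":      1,  # Does it depend heavily on third-party components?
}

def risk_score(answers):
    """Sum the weights of every profiling question answered 'yes'."""
    return sum(weight for key, weight in PROFILE_QUESTIONS.items() if answers.get(key))

def risk_tier(score):
    """Map a raw score to a coarse tier used to prioritize security effort."""
    if score >= 8:
        return "HIGH"
    if score >= 4:
        return "MODERATE"
    return "LOW"

# Example: a hypothetical Internet-facing patient portal that stores PHI.
portal = {"stores_phi": True, "internet_facing": True,
          "conducts_transactions": True, "regulated": True}
print(risk_tier(risk_score(portal)))  # prints "HIGH"
```

The point is not the particular numbers but that the profile is recorded and comparable across the portfolio, so the riskiest applications receive security attention first.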


Stage 3: Design
In any SDLC, the application design is written to ensure that robust functions and features are present. The design stage is also essential for imbuing security within the application. The cost savings attributed to preventing security defects through correct design are an obvious indicator of its positive benefit in the lifecycle. A seminal 2002 NIST study, “The Economic Impacts of Inadequate Infrastructure for Software Testing,” made the large cost to organizations from inadequate software (security) testing crystal clear. NIST estimates that the cost of eliminating a security defect in a production application is 30 times higher than if it is caught during the design stage. It is at the design phase that high-level application requirements are converted to detailed technical specifications. The design phase should generate two distinct items:

• A Threat Model that anticipates possible misuse of the application and a corresponding mitigation strategy to counteract such threats.

• A Security Architecture Design in addition to the architectural design.

Stage 4: Develop
Consider a typical patient care portal. Using the Internet, a patient can log into her account, make an appointment for a regular bone-density test, and ask questions about calcium-rich diets and supplements. Behind the curtain of this seemingly simple process, the portal will have to obtain real-time access to multiple parts of the hospital system, including her Electronic Health Record and admission, billing, and discharge information, as well as a gateway to the provider of lab tests. The portal’s business logic will connect the patient’s request through messaging systems to several operating systems, application servers, a web portal, web servers, network infrastructure, and user interfaces before acknowledging and completing the transaction. This is not easy. It is highly complex, requiring different pieces of software with many thousands or millions of lines of code to authenticate and trust each other and then deliver results in short time windows. According to software security experts Hoglund and McGraw in their book “Exploiting Software,” most applications have between 5 and 50 coding mistakes in every 1,000 lines of code. Thus within large applications, typically consisting of several million lines of code, thousands of security vulnerabilities may exist. To strengthen their coding environments, coding teams must:

• Offer developers avenues to improve their awareness of and training in secure coding principles. These courses of study may be augmented by the provision of secure coding standards, guidelines, and frameworks for key languages and platforms.

• Offer automated processes that facilitate source code reviews.


While manual review of source trees is possible, manual efforts quickly become cost-prohibitive when dealing with large programs. In addition, how code interacts with other pieces of code within the application, i.e., “data flow,” is not well suited to manual checks. A third area that renders manual analysis ineffective is “control flow” analysis, in which one must accurately track the sequencing of operations to prevent issues such as uninitialized variable use or a failure to enable parser validation. Without some kind of automation that allows code reviewers to look through large bodies of complex code, a code review is suboptimal. Software tools that provide the automation and efficiency needed to meet cost, scalability, and accuracy requirements are now widely available and should be used.

Stage 5: Test
In health care, security testing has not been a prominent requirement during the software testing phase. Instead organizations typically focus on functional aspects, quality assurance (QA), load balancing, or user acceptance testing. Application tests for functionality and usability are typically conducted after the development stage. However, to ferret out coding errors while the code is being developed, security testing must be conducted during the coding phase. Leading organizations also typically run additional security tests after the application has been deployed. All of this necessitates that a test plan be developed prior to the writing of code. This deviates slightly from traditional test approaches, but it is a vital requirement. A coherent security testing strategy requires interaction between the development, test, and production or deployment teams. Developing a test plan right after the design phase ensures that the right security tests are developed and used from start to finish. Security process improvements in the testing phase provide considerable benefits, including:

• Security integration with existing test bed environments. Most enterprise test environments use automated tools to perform functional, usability, and QA testing. As security testing processes are embraced, software testers will be inclined to adopt automated security tools that link into their existing test beds.

• Security-related regression tests run by the testing group that confirm the security posture presented by the architecture and development teams. These also provide an added level of comfort to internal and external audit teams.

• A stronger link between developers and testers in producing quality software. As software processes become more complex, the cultural divide between testing groups and developers that is prevalent today becomes easier to bridge.
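To make the kind of defect that code review and security testing target more tangible, here is a minimal, self-contained sketch in Python: a data-access function that uses a parameterized query instead of string concatenation, together with a small security-oriented regression test that feeds it a classic injection payload. The schema, function names, and payload are hypothetical and exist only for this illustration.

```python
# Minimal illustration of a secure coding pattern (parameterized queries)
# and a security-focused regression test. Schema and names are hypothetical.
import sqlite3
import unittest

def find_patient(conn, last_name):
    # User input is bound as data, never spliced into the SQL text,
    # which closes the classic SQL injection path.
    cursor = conn.execute(
        "SELECT id, last_name FROM patients WHERE last_name = ?",
        (last_name,),
    )
    return cursor.fetchall()

class InjectionRegressionTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE patients (id INTEGER PRIMARY KEY, last_name TEXT)")
        self.conn.executemany(
            "INSERT INTO patients (last_name) VALUES (?)",
            [("Rivera",), ("Okafor",)],
        )

    def test_injection_payload_matches_nothing(self):
        # With parameter binding, the payload is treated as a literal
        # surname and returns no rows instead of dumping the table.
        rows = find_patient(self.conn, "' OR '1'='1")
        self.assertEqual(rows, [])

    def test_legitimate_lookup_still_works(self):
        rows = find_patient(self.conn, "Rivera")
        self.assertEqual(len(rows), 1)

if __name__ == "__main__":
    unittest.main()
```

Tests like this belong in the same automated test beds mentioned above, so that a later change which reintroduces string-built SQL fails the build rather than reaching production.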

Stages 6 and 7: Implement and Operate
After the application has been initially deployed and is considered secure, what efforts are made to retain this security? How are interactions between the test, release, and change management teams handled as modifications occur, not only to the application but to the underlying infrastructure? Without specific processes to keep security at the forefront during release management and regular maintenance, the efforts of the architects, developers, and testers to keep the application safe can be laid to waste. Security processes at this stage can include scalability tests, penetration tests, and scanning for patches, and are collectively termed “infrastructure hardening.”

Conducting Software Security Assurance (SSA) is not a one-time event. We have to be continuously diligent. As new software components are added to deliver improved functionality, we may also be adding new vulnerabilities. We need to continuously track and instrument our applications to see how we are faring at reducing vulnerabilities. Developers and software vendors need to judge themselves on how much they contribute to software security or insecurity. Therefore, for long-term assurance, we must also provide a system of metrics to identify and quantify how well we are doing over time.
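One way to make such metrics concrete is to track open findings by severity for each release and collapse them into a trend that can be reported over time. The sketch below is illustrative only; the finding data, severity weights, and report format are invented for the example and not drawn from any particular tool.

```python
# Hypothetical sketch of release-over-release vulnerability metrics.
from collections import Counter

# Severity weights used to collapse findings into one comparable number.
# The weights and the sample data below are illustrative only.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

FINDINGS_BY_RELEASE = {
    "v1.0": ["critical", "high", "high", "medium", "low", "low"],
    "v1.1": ["high", "medium", "medium", "low"],
    "v1.2": ["medium", "low"],
}

def weighted_risk(findings):
    """Sum the severity weights of all open findings for a release."""
    return sum(SEVERITY_WEIGHTS.get(severity, 0) for severity in findings)

def report(findings_by_release):
    """Print severity counts and the weighted risk for each release."""
    for release, findings in findings_by_release.items():
        counts = dict(Counter(findings))
        print(release, counts, "weighted risk:", weighted_risk(findings))

if __name__ == "__main__":
    report(FINDINGS_BY_RELEASE)  # a falling weighted risk indicates progress
```

A falling weighted-risk number across releases is evidence that the SSA program is working; a rising one is an early warning that new components are bringing new vulnerabilities with them.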

What to do about Applications Built by Third Parties? (e.g., System Integrators)

Today, the government buys much of its software from outside vendors and uses large system integrators (SIs) to manage projects. It relies on these SI vendors to extend the functionality of existing applications or to develop entirely new code. These outside vendors may in turn rely on off-shore developers to generate code. The question for the government is: should we blindly trust these outside entities to produce working and secure code with few security vulnerabilities?

Vendor risk management is a broad field with numerous elements and corollary services, e.g., analysis of a vendor’s financial strength or even evaluation of physical security. The procurement offices at various agencies have strict rules for assessing measures of quality to make sure taxpayer money is being spent correctly. While these and other factors are important, we often find that contracts do not require a defined, stable level of security for the application code delivered to the government. Yes, the SI or software vendor furnishes “working,” or functional, code to meet contractual obligations. But does this mean the code is secure? Vendors will often claim that “security” is hard to measure. We contend that in the 21st century this is a fallacy.

With a bevy of available security tools, processes, and reporting measures (such as penetration tests or source code reviews), one can quickly ascertain the relative “security health” of code. Further, we can compare this to other industries and even use simple, established measures such as the Open Web Application Security Project’s (OWASP) Top Ten vulnerabilities list. So why are such checks not performed regularly? We believe that elements of application security are usually not fully considered by the procurement or business groups within government agencies or their SIs. The reasons vary, but often it is simply because the procurement or business groups are unaware of how to introduce security requirements into the contracting process. Security issues in outsourced applications include:

• Does the supplier have specific secure coding standards that its developers must adhere to? Have the supplier’s developers been provided with adequate security training?
• Are there security parameters in the delivery instructions, installation records, and release package?
• Does the acceptance test strategy have clearly defined security requirements?
• Is there a review of the security-related records before delivery?
• How are security test results documented?
• Are the security test results provided to the customer?
• Is there a process for security-based regression, unit, subsystem, and whole-system testing?
• If outsourced, does the RFP require security interactions between the contractor and sub-contractors?
• Does the outsourcing contract allow the client to test (and reject if necessary) an application that does not meet a certain security standard?
• Are there contractual obligations that allow the client to insist upon a third-party security review prior to acceptance?

Our government spends vast amounts of taxpayer money on third parties and system integrators to help build our software platforms in health care. It is critical that contracts, delivery milestones, and documentation reflect a very high level of software security. Third-party vendors owe it to us, and in 2010 it is not difficult to do with a host of new technologies. Vendors must state exactly how they conduct checks for software security. If we find this wanting, as IT stewards for the government we should not accept such code, or we should claim a breach of contract. Why should the government be caught holding the bag for poorly written software?
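By way of illustration, an acceptance gate on delivered code might refuse a release whose scan results exceed agreed limits, for example findings mapped to OWASP Top Ten categories. The report structure, category names, and thresholds in the sketch below are invented for this example and are not a standard interchange format.

```python
# Hypothetical acceptance gate for third-party code deliveries.
# The report structure and thresholds are illustrative only.

# Findings as they might be exported from a scan: (category, severity).
DELIVERED_SCAN_REPORT = [
    ("Injection", "high"),
    ("Broken Access Control", "medium"),
    ("Security Misconfiguration", "low"),
]

# Contractually agreed maximum open findings per severity at acceptance.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5, "low": 20}

def acceptance_gate(findings, limits):
    """Return (accepted, reasons); reject if any severity exceeds its limit."""
    counts = {}
    for _category, severity in findings:
        counts[severity] = counts.get(severity, 0) + 1
    reasons = [
        f"{severity}: {counts.get(severity, 0)} open findings exceed the agreed maximum of {limit}"
        for severity, limit in limits.items()
        if counts.get(severity, 0) > limit
    ]
    return (not reasons, reasons)

if __name__ == "__main__":
    accepted, reasons = acceptance_gate(DELIVERED_SCAN_REPORT, MAX_ALLOWED)
    print("ACCEPT" if accepted else "REJECT", reasons)
```

Writing the thresholds into the contract, and the gate into the acceptance procedure, turns "the code shall be secure" from a vague aspiration into a testable delivery condition.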


Conclusion

Without adequate software security, we cannot rely on the processes and systems that support today’s advanced, technology-intensive world. Now, with health IT transformation solidly underway, we cannot afford to make new mistakes. Insecure health information systems will result in rampant violations and breaches, inevitable fines and sanctions, and ruined reputations. More tragically, the promise of health IT to dramatically reduce health care costs and improve patient care will be in tatters. As citizens we simply cannot let this happen. We must design security in, with the right balance of processes, behavior changes, and technology controls. We need to start at the very core: with better software security.

Addendum A
Critical National Institute of Standards and Technology (NIST) Guidance to Federal IT Practitioners

For federal IT professionals implementing and maintaining systems with Protected Health Information (PHI), the inference and mandate, in our opinion, is relatively clear: any information system that stores or transmits PHI could, if breached, cause severe or catastrophic harm, at least to individuals. Thus all such systems should be categorized as having a potential impact of HIGH under Federal Information Processing Standards (FIPS) 199. Once a system is categorized, the agency responsible for it must then take the control steps provided in NIST Special Publication 800-53. Specifically, Appendix D of NIST SP 800-53 contains the security control baselines that represent the starting point in determining the security controls for low-impact, moderate-impact, and high-impact information systems. For the purposes of this paper, which focuses on security weaknesses during software development and/or acquisition from a third party or integrator, the necessary NIST controls fall under the family described as System and Services Acquisition (SA). Appendix D describes the controls and control enhancements, and provides additional guidance, for the 14 controls in System and Services Acquisition. These 14 controls total over 11 pages of text and should be considered mandatory reading by IT practitioners responsible for federal systems.


Within these pages we wish to highlight several excerpts that relate directly to this paper.

Control SA-3: Life Cycle Support
The organization:
a. Manages the information system using a system development life cycle methodology that includes information security considerations;
b. Defines and documents information system security roles and responsibilities throughout the system development life cycle; and
c. Identifies individuals having information system security roles and responsibilities.

Control SA-4: Acquisitions (Recommended Control Enhancement)
The organization requires software vendors/manufacturers to demonstrate that their software development processes employ state-of-the-practice software and security engineering methods, quality control processes, and validation techniques to minimize flawed or malformed software.

Control SA-8: Security Engineering Principles
The organization applies information system security engineering principles in the specification, design, development, implementation, and modification of the information system. Examples of security engineering principles include:
a. developing layered protections;
b. establishing sound security policy, architecture, and controls as the foundation for design;
c. incorporating security into the system development life cycle;
d. delineating physical and logical security boundaries;
e. ensuring system developers and integrators are trained on how to develop secure software;
f. tailoring security controls to meet organizational and operational needs; and
g. reducing risk to acceptable levels, thus enabling informed risk management decisions.

Control SA-10: Developer Configuration Management
The organization requires that information system developers/integrators:
a. Perform configuration management during information system design, development, implementation, and operation;
b. Manage and control changes to the information system;
c. Implement only organization-approved changes;
d. Document approved changes to the information system; and
e. Track security flaws and flaw resolution.

Control SA-11: Developer Security Testing
The organization requires that information system developers/integrators, in consultation with associated security personnel (including security engineers):
a. Create and implement a security test and evaluation plan;
b. Implement a verifiable flaw remediation process to correct weaknesses and deficiencies identified during the security testing and evaluation process; and
c. Document the results of the security testing/evaluation and flaw remediation processes.

Control SA-11: Developer Security Testing (Control Enhancement)
The organization requires that information system developers/integrators employ code analysis tools to examine software for common flaws and document the results of the analysis.
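For practitioners who need to demonstrate coverage of these controls, a simple, auditable record of implementation status per control is often the first artifact an assessor asks for. The sketch below is a hypothetical illustration; the status values, evidence pointers, and example system are invented, and the control titles simply mirror the excerpts above.

```python
# Hypothetical tracker for the status of selected SA-family controls on a
# single system. Status values and evidence pointers are illustrative only.

SA_CONTROLS = {
    "SA-3":  "Life Cycle Support",
    "SA-4":  "Acquisitions (Recommended Control Enhancement)",
    "SA-8":  "Security Engineering Principles",
    "SA-10": "Developer Configuration Management",
    "SA-11": "Developer Security Testing",
}

# Status per control for a hypothetical system, with a pointer to evidence.
STATUS = {
    "SA-3":  ("implemented", "SDLC policy v2, section 4"),
    "SA-4":  ("planned", "next contract renewal"),
    "SA-8":  ("implemented", "security architecture review, 2010-11"),
    "SA-10": ("implemented", "change management SOP"),
    "SA-11": ("partially implemented", "static analysis in place; pen test pending"),
}

def gaps(status):
    """List controls that are not yet fully implemented."""
    return [cid for cid, (state, _evidence) in status.items() if state != "implemented"]

if __name__ == "__main__":
    for control_id in gaps(STATUS):
        print(f"Gap: {control_id} {SA_CONTROLS[control_id]} -> {STATUS[control_id]}")
```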

About Fortify
HP Fortify’s Software Security Assurance products and services protect organizations from the threats posed by security flaws in business- and mission-critical software applications. Our software security assurance suite, HP Fortify Software Security Center, drives down costs and security risks by automating key processes of developing and deploying secure applications.

About HP Enterprise Security
HP is a leading provider of security and compliance solutions for modern enterprises that want to mitigate risk in their hybrid environments and defend against advanced threats. Based on market-leading products from ArcSight, Fortify, and TippingPoint, the HP Security Intelligence and Risk Management (SIRM) Platform uniquely delivers the advanced correlation, application protection, and network defense technology to protect today’s applications and IT infrastructures from sophisticated cyber threats. Visit HP Enterprise Security at: www.hpenterprisesecurity.com


© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. All other product and company names may be trademarks or registered trademarks of their respective owners. ESP-BWP00X-MMDDYY-XX, Created Month 2011
