Internal Medicine Journal 2002; 32: 91–99
REVIEW ARTICLE
Health services research: what is it and what does it offer?

I. SCOTT1 and D. CAMPBELL2

1Clinical Services Evaluation Unit, Princess Alexandra Hospital, Brisbane, Queensland, and 2Clinical Epidemiology and Health Services Evaluation Unit, Royal Melbourne Hospital, Melbourne, Victoria, Australia
Abstract

Over the last 20 years, clinical medicine has witnessed rapid expansion in its underlying evidence base, greater demand for accountability in clinicians’ use of limited resources and increasing societal expectation for health care that confers proven benefit at reasonable cost to all eligible recipients. Health services research, also referred to as the clinical evaluative sciences, has grown in response to the need for objective empirical analysis of the modern health system’s ability to deliver effective, efficient, equitable and safe care and to further the health and well-being of whole populations. In this article we provide an overview of the aims, methods and outputs of this burgeoning new discipline. (Intern Med J 2002; 32: 91–99)

Key words: effectiveness, efficiency, health services, quality, research.
INTRODUCTION
The issue of accountability has become a dominant theme in health care.1 Policy-makers, managers, health professionals and the public have all come to recognize that, despite best intentions, suboptimal care exists within the health-care system.2–5 The resultant human and financial cost has prompted concerted efforts to address the problem6 using methods that are integral to the paradigms of evidence-based medicine,7 clinical informatics,8 and quality improvement.9 Such is the environment that has given birth to the applied science called health services research. In this article we review the aims and methods of this science, and illustrate some of its practical applications to the practice of consultant physicians.
Correspondence to: Dr Ian Scott, Director of Internal Medicine, Level 5B, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland 4102, Australia. Email: [email protected]

Received 2 February 2001; accepted 12 September 2001.
DEFINING HEALTH SERVICES RESEARCH

While many all-inclusive definitions exist, health services research is defined here simply as the scientific study of the tasks, resources, activities and results of clinical practice and health services (Fig. 1). As the definition implies, it analyses many aspects of care and, in so doing, embraces a diversity of disciplines (Table 1). At the risk of oversimplification, health services research investigates three basic, but interrelated, dimensions of care: (i) the process of deciding what care to provide, (ii) the process of providing care in the best possible manner and (iii) the outcomes that result from care. Many health services research projects study aspects of care that span all three dimensions, under the rubric ‘quality of care’.10 This frequently used phrase is rarely defined, but encompasses notions of effectiveness, efficiency, safety, access and consumer satisfaction. Equally enigmatic is the proposed distinction between manager-driven ‘top-down’ health services research and clinician-driven ‘bottom-up’ clinical practice research.11 We contend that such
distinctions centre on issues of stewardship of this field of operational research rather than on true differences in aims and methods, and that the phrases are essentially interchangeable.

Figure 1  The continuum of health-related research.

Table 1  Disciplines related to health services research
Clinical epidemiology
Biomedical statistics
Biomedical research
Operational research
Information science
Clinical audit and utilization review
Health technology assessment
Clinical managerialism
Organizational dynamics
Health psychology
Medical sociology
Health economics
Public health
Clinical ethics and health law
EFFECTIVENESS AND APPROPRIATENESS OF CARE

Effectiveness research

Effectiveness research asks, What is the right thing to do? (i.e. What care confers significant health benefit for a given clinical situation?) The best arbiter of the effectiveness of different management strategies is evidence from pragmatic, randomized clinical trials (RCTs).12 However, in situations where RCTs are not feasible or appropriate, observational studies which use quasi-experimental methods may have to suffice.13 If well performed, the latter often provide estimates of effect equivalent to those of RCTs,14,15 and assist in the evaluation of effectiveness (effects under real-world conditions) as distinct from efficacy (effects under ideal trial conditions). In this context,
health services research seeks to inform health-care policy-making by profiling the resource requirements and health effects of disease prevention and management strategies as applied to large-scale populations and organizations.16,17

Example: A systematic review of all RCTs of the effects of acute stroke units (ASUs), published in 1995, concluded that formally organized multidisciplinary hospital units resulted in fewer deaths and lower levels of institutionalization and disability following acute stroke, compared with usual care in general wards.18 These results have prompted the establishment of ASUs within all large- and medium-sized hospitals.
Implementation research

Clinical effectiveness is also influenced by the extent to which definitive research evidence is implemented in routine clinical decision-making. Implementation research aims to identify robust methods for systematically integrating evidence with practice, thereby assisting clinicians and managers to narrow the ‘evidence–practice’ gap.19

Example: Systematic reviews have concluded that traditional educational strategies, such as dissemination of printed materials and lecture-based meetings, have little or no effect on practice.20 More effective methods include: (i) interactive, case-based meetings with small groups, (ii) academic detailing, (iii) decision support systems (such as prompts and reminders), (iv) audit and feedback and (v) patient reminders.
Appropriateness research

Appropriateness research asks, Was the most appropriate thing done given the clinical circumstances?
This question covers issues of overuse, underuse or misuse of interventions. Overuse is of particular concern given limited resources, the proliferation of technology and the increasing potential for care-induced harm, especially in older persons.21 Appropriateness studies also compare variations in observed practice with (i) ‘best practice’ standards that are based on definitive evidence or, where this is lacking (the ‘grey zones’ of practice22), (ii) expert opinion distilled by formal group processes.23 However, within the ‘grey zones’ of practice, methods for distinguishing appropriate care from inappropriate care will inevitably invoke value judgements and raise questions about reproducibility.24 Accordingly, results of appropriateness studies should be used to flag potential problem areas that warrant more detailed analysis by those directly involved.

Examples: On the basis of literature review and expert panel consensus, a review of the appropriateness of coronary angiography, upper gastrointestinal endoscopy and carotid endarterectomy (performed on 4564 patients in the USA throughout 1988) concluded that, respectively, 23%, 24% and 64% of these procedures were not indicated.25 Recent Victorian data revealed substantial (up to fivefold) variation in the age-adjusted rates of use of these same procedures throughout the state. This variation is unexplained by differences in casemix or geographical access to services.4
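To illustrate how such ‘age-adjusted rates’ are typically derived, the sketch below applies direct standardization: each region’s age-specific procedure rates are weighted to a common reference population before comparison. The regions, age bands and counts are invented for exposition and do not come from the Victorian data cited above.

```python
# Direct age-standardization of procedure rates: hypothetical data only.

# Reference ('standard') population by age band
standard_pop = {"<55": 600_000, "55-74": 300_000, "75+": 100_000}

# Observed procedures and mid-year population for two invented regions:
# age band -> (number of procedures, resident population)
regions = {
    "Region A": {"<55": (120, 250_000), "55-74": (480, 130_000), "75+": (300, 40_000)},
    "Region B": {"<55": (60, 180_000), "55-74": (900, 110_000), "75+": (700, 35_000)},
}

def age_standardized_rate(region_data, standard=standard_pop):
    """Procedures per 100 000, weighted to the standard population's age structure."""
    total_standard = sum(standard.values())
    expected_events = 0.0
    for band, (procedures, population) in region_data.items():
        age_specific_rate = procedures / population      # crude rate within the age band
        expected_events += age_specific_rate * standard[band]
    return 100_000 * expected_events / total_standard

for name, data in regions.items():
    print(f"{name}: {age_standardized_rate(data):.1f} procedures per 100 000 (age-adjusted)")
```

Because both rates are weighted to the same age structure, any residual difference cannot be attributed to differing age profiles; casemix and access, as noted above, require separate adjustment.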
CLINICAL PRACTICE PERFORMANCE

Performance research asks, Was the right thing done well? (e.g. in the right way, at the right place, at the right time, with the best use of resources). Study factors include timely access to care, efficient delivery of care (who, what, when and how) and issues of safety and technical quality. In recent times, the science of industrial quality management has been championed within the health-care sector.26–28 It aims to improve the total system of care by collective action, as opposed to simply identifying and removing poorly performing individuals (or ‘bad apples’). The agenda of what is now termed continuous practice improvement (CPI) has broadened to include issues of effectiveness and outcome as well as quality of performance. The core method of CPI has been the quality improvement cycle.27 This involves: (i) determining ‘best practice’ standards using formal methods, (ii) identifying areas of practice with potential for improvement, (iii) defining and measuring the processes of care, (iv) ascertaining gaps between
observed and ‘best practice’ and (v) instituting remedial strategies (with many including clinician feedback), evaluating their effects and modifying them as necessary. Clinical pathways, clinical audits, performance indicators, statistical process control and incident analysis are examples of this approach.29–31 As well as seeking to improve patient outcomes, CPI is driven, in part, by a quest for greater efficiency inherent in casemix-based funding.32

Example: A quasi-experimental study spanning more than 2 years evaluated the effects of a CPI intervention on the appropriateness of laboratory testing involving patients with acute myocardial infarction.33 The intervention comprised laboratory testing guidelines, education programmes and evaluative feedback. The proportion of clinically indicated tests that were requested increased from 77% to 88% in the experimental group following intervention, while the number of non-clinically indicated tests decreased by 82% (P < 0.01). No changes were seen in the controls.

Although CPI methods resemble those of traditional clinical science,34 and although most reported applications have demonstrated favourable effects,35 rigorously evaluated clinical applications of CPI remain relatively few in number. Low levels of clinician buy-in are purported to be the main reason, reflecting (i) clinician scepticism about the apparent emphasis on cost reduction at the expense of quality of care, (ii) perceived irrelevance of CPI to care of individual patients, (iii) insufficient time and resources, (iv) paucity of credible clinical data, (v) lack of peer and managerial support, (vi) perceived loss of clinical autonomy and (vii) excessive use of jargon and zealotry.36 More research is required to: (i) determine preconditions for greater clinician involvement in CPI and (ii) determine how to measure and report clinical performance more accurately, in ways that are acceptable and actionable to clinicians.36,37

The reliable identification of quality of care problems involving clinicians is also methodologically challenging.38 Applying explicit, evidence-weighted process of care criteria to both self-reported and actual practice is one way of imparting greater objectivity to such investigations.39,40 Analysis of serious and unexpected adverse events (sentinel events) pinpoints which areas of practice are in need of quality efforts.41 In contrast, recent publicity given to ‘report card’ measures of quality (such as hospital mortality rates) may not be justified, given their insensitivity and lack of predictive value.42
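Of the CPI tools listed above, statistical process control lends itself to a brief illustration. The sketch below computes the control limits of a p-chart for a monthly process-of-care indicator; the monthly counts are invented and do not come from the cited laboratory-testing study.

```python
# p-chart limits for a process-of-care indicator: hypothetical monthly data only.
import math

eligible  = [42, 38, 45, 51, 40, 44, 39, 47, 43, 50, 41, 46]   # eligible patients per month
compliant = [33, 30, 37, 40, 29, 36, 31, 38, 34, 24, 33, 38]   # received the indicated care

p_bar = sum(compliant) / sum(eligible)   # centre line: overall proportion compliant

for month, (n, x) in enumerate(zip(eligible, compliant), start=1):
    p = x / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)    # binomial standard error for this month's denominator
    lcl = max(0.0, p_bar - 3 * sigma)             # three-sigma control limits
    ucl = min(1.0, p_bar + 3 * sigma)
    flag = "  <-- outside limits: investigate for special-cause variation" if p < lcl or p > ucl else ""
    print(f"Month {month:2d}: p = {p:.2f}  limits = ({lcl:.2f}, {ucl:.2f}){flag}")
```

Points falling within the limits are treated as common-cause variation; only points outside them trigger investigation, which is what distinguishes this approach from reacting to every month-to-month fluctuation.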
ASSESSING ACCESS TO INDICATED CARE
As a quality of care indicator, underuse of indicated interventions raises as much concern as overuse. Using validated techniques,43 health services research attempts to identify situations where patients with clear-cut clinical needs are denied indicated care on the basis of: (i) ethnicity, (ii) gender, (iii) socioeconomic status, (iv) geographical isolation or (v) inadequate service levels. Such analysis informs both the planning and resourcing of future health services as well as exposing discriminatory practices that need to be curtailed.44
Example: A Canadian study of 35 000 patients with acute myocardial infarction revealed that increases in median income from lowest to highest quintiles were associated with a 23% increase in rates of coronary angiography, a 45% decrease in waiting times and a 25% decrease in mortality at 1 year.45 These findings were independent of differences in patient age, sex and disease severity and in the characteristics of admitting hospitals.
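Analyses of this kind typically model receipt of an indicated procedure against socioeconomic position while adjusting for clinical confounders. The following sketch shows one common approach, multivariable logistic regression, using synthetic data; the variable names, simulated coefficients and dataset are invented and do not reproduce the cited Canadian study.

```python
# Adjusted analysis of access to an indicated procedure: synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "income_quintile": rng.integers(1, 6, n),   # 1 = lowest, 5 = highest
    "age": rng.normal(67, 10, n),
    "male": rng.integers(0, 2, n),
    "severity": rng.normal(0, 1, n),            # higher = sicker
})

# Simulated 'truth': access improves with income and falls with age and severity
true_logit = -0.5 + 0.15 * df.income_quintile - 0.02 * (df.age - 67) - 0.3 * df.severity
df["angiography"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

model = smf.logit("angiography ~ income_quintile + age + male + severity", data=df).fit(disp=0)
print(np.exp(model.params))              # adjusted odds ratios (per quintile for income)
print(model.conf_int().apply(np.exp))    # 95% confidence intervals on the odds-ratio scale
```

An income-quintile odds ratio that remains above 1 after adjustment is the kind of finding that would suggest a socioeconomic gradient in access rather than differences in clinical need.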
ASSESSING OUTCOMES OF CARE AND PATIENT PREFERENCES

Outcomes research asks, Was the outcome of care satisfactory from both the clinician’s and patient’s point of view? Particular emphasis is given to patient-centred outcomes: to evaluating effects of care on quality of life (QOL) (as well as on frequency of discrete clinical events) and to eliciting patients’ level of satisfaction with care received.46,47 This research aims to produce valid, standardized outcome measures that apply to individuals and populations over extended time frames.48 Disease registries and clinical databases that include QOL measures are useful sources of outcome data, but more are needed and require adequate long-term resourcing.49 Such research also seeks to determine whether current practice and policies are aligned with patient preferences and are achieving outcomes desired by patients.50

Example: A prospective cohort study of patients hospitalized with congestive heart failure (CHF) revealed a mortality rate of 17% and a readmission rate for recurrent CHF of 12% over a 4-month follow-up period.51 Survivors’ mean QOL scores (measured using a widely used QOL instrument, the SF-36) were considerably lower for all subscales compared with normative data for Australian men and women of similar age. A considerable burden of care was imposed on patients’ carers and community services. Patient knowledge surveys demonstrated an apparent lack of understanding of symptoms and management of CHF.
DECIDING ALLOCATION OF RESOURCES

Deciding how best to use limited resources when managing various disease conditions requires an evaluation of: (i) disease prevalence, (ii) the direct and indirect costs of care, (iii) outcome probabilities of specific interventions and (iv) patient or societal valuation of specific outcomes. Such studies – using explicit decision analytic methods and econometric modelling – inform and improve resource allocation decisions.52 Efforts to refine these techniques further are a priority for health services research.

Example: Magnetic resonance imaging (MRI) is a multimillion dollar industry in Australia, with scant research to help guide its proper role and use. Patients with neurological syndromes constitute the majority of referrals. A recent cost-effectiveness analysis showed that, even with a 30% pretest likelihood of neurological disease, MRI use had an incremental cost of $US101 670 for each quality-adjusted life-year (QALY) saved, compared with $US20 290 for computed tomography (CT) scans.53 Only when the disease probability equalled or exceeded 80% did MRI become a cost-effective alternative to CT scans, at an incremental cost below $US25 000 per QALY saved.
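The comparison underlying such figures reduces to an incremental cost-effectiveness ratio (ICER): the extra cost of one strategy over another divided by the extra QALYs it yields. The sketch below uses invented per-patient figures, not those of the cited MRI analysis.

```python
# Incremental cost-effectiveness ratio (ICER): hypothetical figures only.

def icer(cost_new, qalys_new, cost_old, qalys_old):
    """Extra cost per additional QALY gained by the newer strategy."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Invented per-patient averages for two diagnostic strategies
ct  = {"cost": 900.0,  "qalys": 10.20}
mri = {"cost": 1900.0, "qalys": 10.21}

ratio = icer(mri["cost"], mri["qalys"], ct["cost"], ct["qalys"])
print(f"MRI versus CT: ${ratio:,.0f} per QALY gained")

# A strategy is conventionally called cost-effective when its ICER falls below a
# chosen willingness-to-pay threshold (an illustrative $50 000 per QALY is used here).
threshold = 50_000
print("cost-effective at this threshold" if ratio <= threshold else "not cost-effective at this threshold")
```

As in the MRI example above, a strategy that adds substantial cost for a very small QALY gain produces a large ICER, and small changes in the assumed probability of disease can shift the result across the threshold.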
EVALUATING EFFECTS OF ORGANIZATIONAL RE-ENGINEERING

The last decade has seen major changes in the structures and dynamics of health-care organizations. Changes that have impacted on service delivery include: (i) regionalization, corporatization, privatization, downsizing and outsourcing of health services, (ii) devolved budgeting and (iii) purchaser/provider splits.54 In addition, the traditional roles and functions of the teaching hospital are being redefined by the advent of: (i) casemix-based funding, (ii) medical informatics and telemedicine, (iii) day patient and ambulatory care, (iv) shared hospital–community care arrangements and (v) interdisciplinary groupings.55 There is an urgent need for more research into how these changes impact on quality of care and patient outcomes.56 The administration and planning of costly hospital services are too often based on insufficient research or policy analysis.57

Examples: Analysis in the UK of the effects of computerization of National Health Service (NHS) hospitals –
estimated to have cost £220 million per year since 1991 – suggests that the return on investment in terms of improved patient care is marginal, despite savings from increased administrative efficiency.58 The costly implementation of large-scale clinical information systems59 (including digital radiology60) has not yet been shown, on the basis of rigorous analysis, to deliver significant enhancements in quality of care or patient outcomes. In contrast, more circumscribed applications – such as computerized test ordering and drug prescribing, linked with decision support and incident monitoring – have been successful in reducing levels of inappropriate care.61
At the more fundamental level of group (or organizational) psychology, the hidden determinants of practice patterns and professional norms need to be explored if organized health care is to devise and adopt new CPI techniques.62

Example: A recent ethnographic study showed that junior medical staff working in teaching hospitals are more likely to order pathology tests on the basis of informal, unwritten protocols conveyed by senior clinicians (‘folklore of the service’) than on formal guidelines, manuals or algorithms.63

THE CHALLENGES OF HEALTH SERVICES RESEARCH

While results of RCTs constitute high-level evidence of interventional efficacy, this study design can be difficult to apply when evaluating the usefulness of established health-care technologies. Evaluating the utility of diagnostic testing is a particular challenge given the potential implications of diagnostic test utilization and ‘downstream’ interventions (appropriate and inappropriate) for health-care budgets.64 There is also a need to assess the impact of diagnostic tests on health outcomes following widespread implementation.65 Moreover, many policy decisions at macro- (health-care system) and meso- (institutional) level involve changes in organizational structures, modes of practice and personnel skills and are not readily amenable to randomization and blinding.66 Instead, the effects of such changes are evaluated as ‘natural experiments’, before-and-after comparisons and matched group or case–control studies. In the absence of control groups, confounding factors may invalidate proposed cause and effect relations. Various adjustment methods can be applied to large observational databases to allow for such confounding,67 but analyses may still be flawed by the inaccuracy or incompleteness of routinely collected administrative data.
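One widely used family of adjustment methods is indirect standardization: a risk model fitted to the whole database supplies each patient’s expected probability of the outcome, and each unit’s observed events are compared with the sum of those expectations. The sketch below assumes such predicted risks already exist; the hospitals, patients and risks are invented for illustration.

```python
# Risk-adjusted outcome comparison via an observed/expected (O/E) ratio:
# invented patient-level data; predicted risks assumed to come from a prior risk model.

hospital_a = [  # (observed death: 1/0, model-predicted risk of death)
    (0, 0.05), (1, 0.30), (0, 0.10), (1, 0.45), (0, 0.08), (0, 0.12), (1, 0.50),
]
hospital_b = [
    (0, 0.20), (0, 0.30), (1, 0.25), (0, 0.40), (0, 0.15), (0, 0.20), (0, 0.30),
]

def observed_expected_ratio(patients):
    observed = sum(died for died, _ in patients)
    expected = sum(risk for _, risk in patients)   # sum of predicted risks = expected deaths
    return observed / expected

for name, patients in [("Hospital A", hospital_a), ("Hospital B", hospital_b)]:
    ratio = observed_expected_ratio(patients)
    direction = "more" if ratio > 1 else "fewer"
    print(f"{name}: O/E ratio = {ratio:.2f} ({direction} deaths than expected for its casemix)")
```

The adequacy of the comparison, of course, depends entirely on the risk model and on the completeness of the administrative data feeding it, which is precisely the caveat raised above.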
Another methodological challenge is the limited opportunity to collect baseline measurements or undertake exploratory analyses before new forms of care are introduced.68 New forms of care often change and evolve over time in response to changing circumstances (some self-induced) and reflect local settings.69 Thus the ‘intervention’ is neither fixed nor free of external influences, and attribution of subsequently realized benefits (or null effects) to specific interventional components can be a hazardous affair.70 Attempts to replicate such ‘black box’ changes in other sites must then confront culture and practice norms that are different to those seen in the study settings.

In obtaining answers to certain questions, the successful health service researcher must be adept at using qualitative as well as quantitative methods.71 Qualitative methods are necessary in profiling provider and patient experiences of health care and in suggesting determinants of behaviour.72 Such insights help generate testable hypotheses and experimental interventions that have ‘real world’ implications.
A multiplicity of other factors (Table 2) further conspires to make health services research more difficult to conduct than biomedical research. Another deterrent is the low level of funding allocated to such research compared with other types of medical research: only 6% of the $A165 million of research funding dispensed by the National Health and Medical Research Council in 1998 was allocated to public health and health services research.73 The final challenge is ensuring that the potentially useful outputs of health services research are integrated into clinical practice, policy and management.
CONTRIBUTIONS OF HEALTH SERVICE RESEARCH – PAST, PRESENT AND FUTURE

Health services research has provided discernible contributions to knowledge in the context and configuration of: (i) health services and clinical practice, (ii) resource issues, (iii) problems in the delivery and provision of services and (iv) the identification of needs and outcomes related to health status.74 Methodological research has led to improved ways of: (i) defining appropriateness of care, (ii) measuring process and outcomes of care, (iii) developing clinical indicators and (iv) comparing strategies for using quality data to effect care reform.75 Several publications are educating clinicians and managers about health service research methods and their clinical applications.76–78
Table 2  Challenges in performing health services research

Interactional issues
Intrusiveness: People likely to have to contribute to the research effort need to be identified and involved from the beginning in study design and operationalization. This frequently requires multidisciplinary participation.
Perceived threat: Research needs to be perceived as not trying to find mistakes or apportion blame, but to identify system-wide ways of meeting the common goal of delivering care of even higher quality.
Need to involve other research disciplines: If research is to truly enlighten and achieve its objectives, specialists from a variety of disciplines (such as those listed in Table 1) need to be consulted and involved in study design.
Cultural barriers: Training in biomedical sciences and clinical reasoning at the level of individual patients may limit clinicians’ capacity to ask whether the organization of health services and the processes of care, at a population level, are as effective and efficient as they could or should be.
Time constraints: Clinicians as well as policy-makers often want research results quickly, particularly those relating to service planning or ‘politically sensitive’ issues of quality. Studies may also be designed or conducted in haste simply to pre-empt policy decisions that have already been made.
Multiplicity of stakeholders: The findings of clinical research can be implemented by personal decisions of individual clinicians. The findings of health services research, however, often involve effort by multiple players, whose level of enthusiasm or cooperation may vary.

Methodological issues
Randomization: Randomizing and blinding large organizational units such as hospitals to intervention and control groups poses major logistical challenges. Before-and-after evaluations are more commonly used but bring problems of confounding bias.
Lack of control: Observing the multifaceted, unstructured landscape of clinical practice is integral to health services research, in contrast to clinical trials and laboratory-based research wherein all or most of the variables apart from the study factor can be controlled.
Hawthorne effects and social-response bias: The very act of observing people (clinicians, patients, managers) may cause them to change behaviour, often towards what they perceive as being socially desirable. This change in behaviour may be independent of the effects of specific interventions.
Process measurement: Processes of care that can be complex and intuitive need to be defined in a way that renders them capable of reliable, standardized measurement.
Outcome measurement: In addition to ‘hard’ end-points (mortality, discrete events), outcomes such as quality of life or disease-specific functional measures need to be considered. Validated and reliable instruments used for this purpose may be perceived as intrusive or unhelpful by practising clinicians.
Generalizability: Studies conducted in a single site or setting may not generalize to other sites or settings, particularly if components of the study intervention are highly idiosyncratic and dependent on a clinical culture that may not be readily transferable.
Reductionism: Like most science, health services research can also be accused of taking as its subject a single condition or process that represents only one aspect of care delivery. The interrelatedness of many aspects of care delivery limits the ability of a single study to determine all or most of the key elements conducive to effective health-care reform.
In Australia and New Zealand, this type of research is being increasingly recognized as an important health discipline. For example: (i) an inaugural, international conference of health service researchers was convened in Sydney in 1999 and the first Asia-Pacific Quality Improvement in Healthcare Forum was held in the same city in 2001, (ii) the Commonwealth has funded the Evidence-Based Clinical Practice Research Initiative,79 (iii) up to 15 research centres dedicated to improving clinical effectiveness have been commissioned in tertiary hospitals, (iv) training courses in health service research exist in various universities and (v) the Commonwealth has recently distributed funds to the States to undertake quality improvement projects, the design and conduct of which will require health service research expertise.81 Importantly, The Royal Australasian College of Physicians has obtained Commonwealth funds to conduct the Clinical Support Systems Project. This consists of clinician-led projects in four different sites, aimed at developing, applying and evaluating systematic methods for improving quality of care for patients with defined clinical conditions.81,82
CONCLUSION

The potential of health services research to make further useful contributions to health-care reform will continue to grow.83 Areas of need are: (i) health technology assessment (especially of new and rapidly evolving technologies), (ii) service delivery and organization (with greater attention to organizational psychology, clinical sociology and the impact of clinical informatics), (iii) evaluation and enhancement of clinical performance and (iv) resource allocation. If the value of such research is to be maximized, it requires: (i) a priority-driven research agenda,84 (ii) commitment from a wide range of stakeholders (including professional colleges and state health authorities), (iii) explicit funding, (iv) dedicated training schemes, (v) research programmes at regional level and (vi) proactive consideration of its results and recommendations on the part of clinicians, managers and policy-makers. In time, health services research will come to be seen as being as important as biomedical research to the advancement of clinical practice and the people it serves.
REFERENCES

1 Relman AS. Assessment and accountability: the third revolution in medical care. N Engl J Med 1988; 319: 1220–2.
2 Wilson RM, Runciman WB, Gibberd RW et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458–71.
3 Roughead EE. The nature and extent of drug-related hospitalisations in Australia. J Qual Clin Pract 1999; 19: 19–22.
4 Richardson J. The health care financing debate. In: Mooney G, Scotton R, eds. Economics and Australian Health Care Policy. Sydney: Allen & Unwin; 1998; 192–213.
5 Senes-Ferrari S. Coronary angioplasty in Australia. Australian Institute of Health and Welfare and National Heart Foundation, 1995. AIHW cat. no. CVD5. Canberra: AIHW (Cardiovascular Disease Series no. 8); 1999.
6 National Expert Advisory Group on Safety and Quality in Australian Health Care. Implementing quality and safety improvement in Australian health care. Final report. Canberra: Department of Health and Aged Care; 1999.
7 Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992; 268: 2420–5.
8 Coiera E. Medical informatics. BMJ 1995; 310: 1381–7.
9 Kenagy JW, Berwick DM, Shore MF. Service quality in health care. JAMA 1999; 281: 661–5.
10 Brook RH, Lohr KN. Efficacy, effectiveness, variations and quality. Boundary-crossing research. Med Care 1985; 23: 710–22.
11 McDonald IG, Daly JM. The anatomy and relations of evidence-based medicine. Aust NZ J Med 2000; 30: 385–92.
12 Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med 1984; 3: 409–22.
13 Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996; 312: 1215–18.
14 Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000; 342: 1878–86.
15 Concato J, Shah N, Horwitz RI. Randomized controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342: 1887–92.
16 Appleby J, Walshe K, Ham C. Acting on the evidence. A review of clinical effectiveness: sources of information, dissemination and implementation. Birmingham: National Association of Health Authorities and Trusts (NAHAT); 1995. Research Paper Number 17.
17 Ellrodt G, Cook DJ, Lee J et al. Evidence-based disease management. JAMA 1997; 278: 1687–92.
18 Stroke Unit Trialists’ Collaboration. Collaborative systematic review of the randomised trials of organised inpatient (stroke unit) care after stroke. BMJ 1997; 314: 1151–9.
19 Grol R, Grimshaw J. Evidence-based implementation of evidence-based medicine. Jt Comm J Qual Improv 1999; 25: 503–13.
20 Bero LA, Grilli R, Grimshaw JM et al. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ 1998; 317: 465–8.
21 Fisher ES, Welch HG. Avoiding the unintended consequences of growth in medical care. How might more be worse? JAMA 1999; 281: 446–53.
22 Naylor CD. The ‘grey zones’ of clinical practice: some limits to evidence-based medicine. Lancet 1995; 345: 840–2.
23 Black N, Murphy M, Lamping D et al. Consensus development methods. A review of best practice in creating clinical guidelines. J Health Serv Res Policy 1999; 4: 236–48.
24 Shekelle PG, Kahan JP, Bernstein SJ et al. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med 1998; 338: 1888–95.
25 Brook RH, Park RE, Chassin MR et al. Predicting the appropriate use of carotid endarterectomy, upper gastrointestinal endoscopy, and coronary angiography. N Engl J Med 1990; 323: 1173–7.
26 Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989; 320: 53–6.
27 Rhew E. Quality improvement project reviews: a tool to accelerate the transformation. Jt Comm J Qual Improv 1994; 20: 79–89.
28 Palmer RH, Adams ME. Quality improvement/quality assurance: a framework in putting research to work in quality improvement and quality assurance. Agency for Health Care Policy and Research (93–0034); 1993.
29 Layton A, Moss F, Morgan G. Mapping out the patient’s journey. Experiences of developing pathways of care. Qual Health Care 1998; 7: S30–S36.
30 Thomson MA, Oxman AD, Davis DA et al. Audit and feedback to improve health professional practice and health care outcomes (Parts 1 and 2) (Cochrane Review). In: The Cochrane Library, Issue 1. Oxford: Update Software; 1999.
31 Van der Bij JD, Vissers JM. Monitoring health-care processes: a framework for performance indicators. Int J Health Care Qual Assur 1999; 12: 214–21.
32 MacIntyre CR, Brook CW, Chandraraj E, Plant AJ. Changes in bed resources and admission patterns in acute public hospitals in Victoria, 1987–95. Med J Aust 1997; 167: 186–9.
33 Isouard G. A quality management intervention to improve clinical laboratory use in acute myocardial infarction. Med J Aust 1999; 170: 11–14.
34 Berwick DM. The clinical process and the quality process. Qual Manag Health Care 1992; 1: 1–8.
35 Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q 1998; 76: 593–624.
36 Weiner BJ, Shortell SM, Alexander JA. Promoting clinical involvement in hospital quality improvement efforts: the effects of top management, board and physician leadership. Health Serv Res 1997; October: 491–510.
37 Gates PE. Clinical quality improvement: getting physicians involved. QRB Qual Rev Bull 1993; 19: 56–61.
38 Hofer TP, Hayward RA, Greenfield S et al. The unreliability of individual physician ‘report cards’ for assessing the costs and quality of care of a chronic disease. JAMA 1999; 281: 2098–105.
39 Peabody JW, Luck J, Glassman P et al. Comparison of vignettes, standardised patients, and chart abstraction. A prospective validation study of 3 methods for measuring quality. JAMA 2000; 283: 1715–22.
40 Ashton CM, Kuykendall DH, Johnson ML et al. A method of developing and weighting explicit process of care criteria for quality assessment. Med Care 1994; 32: 755–70.
41 Wolff AM, Bourke J. Reducing medical errors: a practical guide. Med J Aust 2000; 173: 247–51.
42 Thomas JW, Hofer TP. Accuracy of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care 1999; 37: 83–92.
43 Kravitz RL, Laouri M. Measuring and averting underuse of necessary cardiac procedures: a summary of results and future directions. Jt Comm J Qual Improv 1997; 23: 268–76.
44 Fiscella K, Franks P, Gold MR, Clancy CM. Inequality in quality. Addressing socioeconomic, racial, and ethnic disparities in health care. JAMA 2000; 283: 2579–84.
45 Alter DA, Naylor CD, Austin P, Tu JV. Effects of socioeconomic status on access to invasive cardiac procedures and on mortality after acute myocardial infarction. N Engl J Med 1999; 341: 1359–67.
46 Ellwood PM. Outcomes management: a technology of patient experience. N Engl J Med 1988; 318: 1549–56.
47 Gerteis M, Edgman-Levitan S, Daley J, Delbanco TL, eds. Through the Patient’s Eyes. Understanding and Promoting Patient-Centred Care. San Francisco: Jossey-Bass; 1993.
48 Wright JC, Weinstein MC. Gains in life expectancy from medical interventions – standardizing data on outcomes. N Engl J Med 1998; 339: 380–6.
49 Black N. Developing high quality clinical databases. BMJ 1997; 315: 381–2.
50 Asch AD, Hershey JC. Why some health policies don’t make sense at the bedside. Ann Intern Med 1995; 122: 846–50.
51 Blyth FM, Lazarus R, Ross D et al. Burden and outcomes of hospitalisation for congestive heart failure. Med J Aust 1997; 167: 67–70.
52 Drummond MF, O’Brien B, Stoddart GL et al. Methods for the Economic Evaluation of Health Care Programmes, 2nd edn. Oxford: Oxford University Press; 1997.
53 Mushlin AI, Mooney C, Holloway RG et al. The cost-effectiveness of magnetic resonance imaging for patients with equivocal neurological symptoms. Int J Technol Assess Health Care 1997; 13: 21–34.
54 Brownell MD, Roos NP, Burchill C. Monitoring the impact of hospital downsizing on access to care and quality of care. Med Care 1999; 37: JS135–JS150.
55 Braithwaite J, Vining RF, Lazarus L. The boundaryless hospital. Aust NZ J Med 1994; 24: 565–71.
56 Aiken LH, Sochalski J, Lake ET. Studying outcomes of organisational change in health services. Med Care 1997; 35: NS6–NS18.
57 Edwards N, Harrison A. Planning hospitals with limited evidence: a research and policy problem. BMJ 1999; 319: 1361–3.
58 Lock C. What value do computers provide to NHS hospitals? BMJ 1996; 312: 1407–10.
59 Peel V, Loeb J, Atkinson C et al. Evaluating a large-scale HIS implementation and its value for hospital resource management. In: Richards B, ed. Current Perspectives in Healthcare Computing: Conference Proceedings. Weybridge: British Journal of Healthcare Computing; 1993; 309–20.
60 Wild C, Peissl W, Tellioglu H. An assessment of picture archiving and communication systems (PACS). The case study of the SMZO Project. Int J Technol Assess Health Care 1998; 14: 573–82.
61 Hunt DL, Haynes RB, Hanna SE et al. Effects of computer-based clinical support systems on physician performance and patient outcomes. A systematic review. JAMA 1998; 280: 1339–46.
62 Garside P. Organisational context for quality: lessons from the fields of organisational development and change management. Qual Health Care 1998; 7: S8–S15.
63 Enno A, Mondy P, Kerridge I, Pearson S. Investigating pathology utilisation by junior medical staff in a teaching hospital: a qualitative study. Aust NZ J Med 2000; 30: 261–3.
64 Verrilli D, Welch G. The impact of diagnostic testing on therapeutic interventions. JAMA 1996; 275: 1189–91.
65 Mackenzie R, Dixon AK. Measuring the effects of imaging: an evaluative framework. Clin Radiol 1995; 50: 513–18.
66 Balas EA, Austin SM, Ewigman BG et al. Methods of randomised controlled clinical trials in health services research. Med Care 1995; 33: 687–99.
67 Iezzoni LI, ed. Risk Adjustment for Measuring Healthcare Outcomes, 2nd edn. Chicago: Health Administration Press; 1997.
68 Rogers EM. Diffusion of Innovations, 4th edn. New York: Free Press; 1995.
69 Bradley F, Wiles R, Kinmonth A-L et al. Development and evaluation of complex interventions in health services research: case study of the Southampton heart integrated care project (SHIP). BMJ 1999; 318: 711–15.
70 Cook TD, Campbell DT. Quasi-Experimentation. Design and Analysis Issues for Field Settings. Chicago: Rand McNally; 1979.
71 Sackett DL, Wennberg JE. Choosing the best research design for each question. BMJ 1997; 315: 1636.
72 Pope C, Mays N. Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311: 42–5.
73 Health and Medical Research Strategic Review. The virtuous circle: working together for health and medical research. Canberra: Commonwealth of Australia; 1998 [cited 2000 December 2]. Available from: URL: http://www.health.gov.au/hmrsr/
74 Leatherman S, ed. Health Services Research: an Anthology. New York: Pan American Health Organisation; 1992.
75 Phelps C. The methodologic foundations of studies of appropriateness of health care. N Engl J Med 1993; 329: 1241–5.
76 Crombie IK, Davies HT. Research in Health Care. Design, Conduct and Interpretation of Health Services Research. New York: John Wiley and Sons; 1996.
77 Shi LS. Health Services Research Methods. Boston: Delmar Publishers; 1996.
78 Black N, Brazier J, Fitzpatrick R, Reeves B, eds. Health Services Research Methods. A Guide to Best Practice. London: BMJ Books; 1998.
79 Phillips PA, Kelly S, Best J. Implementing and sustaining evidence-based clinical practice in Australia: the Evidence Based Clinical Practice Research Initiative. J Eval Clin Pract 1999; 5: 163–8.
80 Ukoumunne OC, Gulliford MC, Chinn S et al. Methods in health service research. Evaluation of health interventions at area and organisation level. BMJ 1999; 319: 376–9.
81 Larkins RG, Long P, Patterson CG. The Clinical Support Systems Program concept: what is it and where did it come from? Intern Med J 2001; 31: 416–17.
82 The Clinical Support Systems Project. Details available at: http://www.racp.edu.au/hpn/cssp/index.htm
83 Lilford RJ. Health services research – what it is, how to do it, and why it matters. Health Serv Manage Res 1994; 7: 214–19.
84 Carson N, Ansari Z, Hart W. Priority setting in public health and health services research. Aust Health Rev 2000; 23: 46–57.