Research Evaluation, volume 13, number 2, August 2004, pages 129-140, Beech Tree Publishing, 10 Watford Close, Guildford, Surrey GU1 2EP, England

Valuation of intangibles

Intellectual capital reporting for universities: conceptual background and application for Austrian universities

Karl-Heinz Leitner

A university's most valuable resources are its researchers and students with their relations and organisational routines; their most important output is knowledge. These resources can be interpreted as intangible assets or intellectual capital, even though the term has so far not been used within the context of universities. Intellectual capital (IC) reporting seems to be a rich source for the development of management and reporting systems for universities. IC reports deliver information about the development and productive use of investments in intangible assets. The Austrian Ministry for Education, Science and Culture has developed an IC model that incorporates the knowledge-production process of universities.

Karl-Heinz Leitner is at the Department of Technology Policy, ARC systems research GmbH, A-2444 Seibersdorf; tel: 0043 50550 3894; fax: 0043 50550 3888; email: karl-heinz.leitner@arcs.ac.at. This is a revised version of a paper presented at the conference 'The Transparent Enterprise. The Value of Intangibles', Autonomous University of Madrid and Ministry of Economy, 25-26 November 2002, Madrid, Spain.


IN MOST EUROPEAN countries, universities are faced with various new challenges caused by political initiatives as well as new management modes proposed for universities. European-wide political activities, such as the Bologna declaration, aim to achieve greater compatibility and comparability in the higher education and research sector (Salamanca, 2001). The idea of New Public Management has influenced the way universities should be governed and managed in recent years and goes hand in hand with more autonomy regarding organisation and budget allocation. With its inherent logic of an output and performance orientation, public organisations thus have to introduce new budget allocation mechanisms based on new performance measures. New Public Management and performance management systems have already been successfully implemented in many areas of the public sector and occasionally by universities. As organisations which are mainly financed by public funds, universities are also confronted with an increased demand by the 'owners' and citizens for transparency regarding the use of those funds and the outputs achieved with them (Schenker-Wicki, 1996). This call for public accountability requires the disclosure of the social and economic outcomes of universities. These political and managerial challenges require the implementation of new management and reporting systems within many European universities. Universities not only have to allocate budgets based on new performance criteria but they also have to manage effectively the resources on which that performance is based. Obviously, the most valuable

0958-2029/04/020129-12 US$08.00 © Beech Tree Publishing 2004



resources of universities are their researchers and students with their relationship networks as well as their organisational routines. These resources can be interpreted as intangible assets or intellectual capital, even though the term has so far not been used within the context of universities. Maul (2000), Bueno (2002), Rodriguez et al (2002) and Leitner and Warden (2004) are among the first to deal with this topic in the context of universities and research organisations. They assume that the management of intellectual capital is crucial for the performance of universities and research organisations and deliver some of the first classifications, models and indicators for intellectual capital management and valuation. These authors thus took up a movement which emerged in industry in the 1990s. Owing to increased investments in intellectual capital — such as employees' training, innovation, research and customer relationships — some private companies have implemented intellectual capital (IC) management and reporting systems. These systems deliver the necessary information for managers and external stakeholders to take decisions with respect to these investments. In recent years the instrument of IC reporting has also been adopted by some innovative public and private research organisations. Among others, the Austrian research organisation ARC, the German Aerospace Center DLR and the Danish research establishment Risø have gained experience with or even published IC reports. Like universities, research organisations are challenged by an increased call for transparency and accountability, which has stimulated the adoption of this new instrument. In the course of the reorganisation of the Austrian universities, the Ministry for Education, Science and Culture decided in 2001 to study the potential of IC reporting for Austrian universities (Leitner et al, 2001).
Based on a research project, a conceptual framework for IC reporting has been developed which should meet the specific requirements of universities. In 2002 the Austrian Parliament finally decided that Austrian universities would in future be obliged to publish IC reports (UG, 2002). The new university law defines the contents and structure of such IC reports, which universities will have to publish in 2006 at the latest. So far, two Austrian universities have gained initial experience with the new instrument (DUK, 2003; Biedermann et al, 2002). In the present paper, the question of the specifics of IC reporting in universities and its difference from other instruments and methods proposed for universities is addressed. First the model for IC management and reporting planned for the Austrian universities is described. Then the instrument of IC reporting is discussed in the context of other management and valuation systems for universities. In universities, the question of the valuation of research, education and knowledge-based processes has for years mainly been treated with various forms of evaluation. Moreover, as mentioned,

recent performance management systems based on the principles of New Public Management have been proposed for and partly implemented by universities. However, these instruments do not usually treat intangible resources explicitly. The paper compares in more detail the approaches of evaluation, performance management and IC reporting and tries to point out the differences, the potential and the constraints of the new instrument presented. Finally, future research requirements are outlined.

An overview

The concept of IC reporting for universities presented here is based on methods developed within industry and on the experience some research organisations have gained over the last few years. The recent developments and applications are briefly illustrated. Since the 1990s industrial firms have been increasingly investing in intangible resources such as training, research, innovation, brands, customer relationships, etc. This changed investment pattern is considered to be the major pillar of the transformation towards a knowledge-based economy (OECD, 1999b). These investments, which are often labelled as investments in intangible assets, knowledge assets or intellectual capital, are increasingly important for the growth and competitiveness of firms and create new challenges for managers and investors. The traditional financial accounting system does not provide sufficient information about these intangible assets for managerial actions to be taken and, according to accounting standards, only a small fraction of all intangible resources can be capitalised. Furthermore, information about the intangible assets of a firm is also important for external stakeholders, especially for shareholders and potential investors. A lack of information about intangible assets leads to information asymmetries between the firm and its shareholders, since the book value does not reflect the real assets and the future earnings potential of industrial firms (Lev, 2001). There is a lively discussion about the definition of intangible assets and intellectual capital. Traditionally, the accounting profession defines intangible assets rather stringently and links them to the property or ownership aspect. This understanding is guided by accounting standards, which define how to value intangible assets and whether they have to be capitalised or expensed within financial accounts.
Because of such a stringent definition, only specific intangible assets that are legally protected, such as patents, can be capitalised according to major national and international accounting standards. By contrast, the management research community frequently interprets a wider variety of intangibles relevant for corporate development and earnings potential as intangible resources and therefore often uses the term 'intellectual capital' (Canibano et al,


1999). Johanson and Skoog (2001, page 4), for instance, define: 'Intellectual Capital could be said to be the way in which different intangibles and tangible resources interact to produce an organization's output'. Intellectual capital could also be defined as knowledge that can be converted to profit. Thus, the aspect of asset and property is important from the accounting perspective, whereas from the management perspective the existence of intangible resources, which allow the build-up of competitive advantage over a longer period, is important. However, there is still no commonly accepted definition of intellectual capital and some authors use the terms 'intellectual capital' and 'intangible assets' synonymously (see e.g. Teece, 2000). In the literature, various instruments and classifications for IC management and reporting have been proposed, which should deliver new information for managers and investors. The so-called Intangible Asset Monitor by Sveiby (1997), for instance, divides intangible assets into internal structure, external structure and competence. The IC approach by Edvinsson (1997), originally introduced by the insurance company Skandia, in contrast, divides intellectual capital into human capital and structural capital, with the latter again divided into customer capital and organisational capital. The MERITUM group, a European research group, divides intellectual capital into human capital, structural capital and relational capital (MERITUM, 2002, page 63). Here human capital is defined as the knowledge that employees take with them when they leave the firm. Structural capital is understood as the knowledge that stays within the firm at the end of the working day; it comprises the organisational routines, procedures, cultures, databases, etc. Finally, relational capital is defined as all resources linked to external partners, such as customers and suppliers.
Moreover, there are a number of other classifications, each of which places its emphasis on particular 'groupings' of intangible assets or IC, which clearly overlap to a great extent (e.g. Stewart, 1997; Sullivan, 2001). For a more elaborate discussion of the different taxonomies and classifications see Canibano et al (1999) and Andriessen (1993). As yet, there is no commonly agreed classification or standard available. Despite the fact that these authors separate different forms of intellectual capital, they are all similar with respect to the way IC is measured and valued. They propose that, for each form of intellectual capital, financial and non-financial indicators are defined. Indicators such as customer satisfaction, education, IT infrastructure, training expenses, turnover, etc. are thus collected for a specified period and compared over the years or with other firms. Moreover, they stress that it is important to describe the aims and strategy for the development of intellectual capital, both to make meaningful comparisons and to allow the selection of relevant indicators.
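The shared indicator logic of these classifications (assign each financial or non-financial indicator to a form of capital, then compare it over the years) can be sketched as a small data model; all names and values here are illustrative, not drawn from any of the cited frameworks:

```python
from dataclasses import dataclass

# One measurement of a financial or non-financial indicator,
# assigned to a MERITUM-style category of intellectual capital.
# (Hypothetical schema for illustration only.)
@dataclass
class Indicator:
    name: str
    category: str   # "human", "structural" or "relational"
    year: int
    value: float

def year_on_year_change(indicators, name):
    """Compare one indicator over the years, as IC reports propose."""
    series = sorted((i for i in indicators if i.name == name),
                    key=lambda i: i.year)
    return [(later.year, later.value - earlier.value)
            for earlier, later in zip(series, series[1:])]

# Illustrative values only
data = [
    Indicator("training expenses", "human", 2001, 120.0),
    Indicator("training expenses", "human", 2002, 150.0),
    Indicator("customer satisfaction", "relational", 2001, 3.9),
    Indicator("customer satisfaction", "relational", 2002, 4.2),
]

print(year_on_year_change(data, "training expenses"))  # → [(2002, 30.0)]
```

The same records could equally be compared against another firm's figures rather than across years, which is the benchmarking use the guidelines envisage.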

The illustrated instruments for IC management and reporting have been adopted in recent years by firms within different industrial sectors, including the financial sector, production and services. In the meantime, in every European country some innovative firms have not only used these instruments for internal management but have also published IC reports to communicate the development of their most valuable resources. Experiences of firms in various sectors and countries have recently been reported (e.g. Bukh et al, 1999; Miller et al, 1999). In order to achieve comparable data, guidelines for preparing intellectual capital reports have been proposed at international and national levels. Interesting examples are the Danish Guidelines for Intellectual Capital Accounts (Danish Ministry of Science, Technology and Innovation, 2003) and, at the European level, the MERITUM Guidelines for Managing and Reporting on Intangibles (MERITUM, 2002). These guidelines also stress that, besides quantitative assessments, qualitative valuations, narrations and story-telling should be applied in these reports (Mouritsen et al, 1998). The idea of IC reporting has not gained much attention within the research and science sector, though. So far only research organisations, which carry out applied research and technological development, have published intellectual capital reports. In 1999 the Austrian Research Centers Seibersdorf was the first European organisation to publish an IC report for the entire organisation, followed by the German DLR, which published its first IC report in 2000. Both reports are based on a similar conceptual framework, developed within ARC, which at the same time allows the comparison of some indicators between these two organisations. The Institute of Optics and Fluid Dynamics of the Danish Risø research organisation has also gained experience with IC reporting.
Experiences with IC reporting in ARC and DLR are described by Koch et al (2000) and Leitner and Warden (2004). Both especially stress that management benefited from answering central strategic questions when selecting the indicators for their reports. A first list of potential indicators of IC management for human capital, structural capital and relational




capital for Spanish research centers and universities is presented by Bueno (2002), who thus adopts the classification for IC proposed by the MERITUM research group (MERITUM, 2002). Biedermann et al (2002) report some first results of a successful implementation of an IC report for an institute at the Montanuniversität Leoben, which is based on the conceptual framework delivered by the new Austrian university law (see next section). In Germany, Maul (2000) dealt with the question of the explanatory power of financial statements published by certain German universities in the course of the reorganisation in some German Federal States. He argues that the traditional financial account does not deliver sufficient information on the financial situation, the earnings and the assets of universities, which is, however, the main task of the annual account according to the German accounting law, the Handelsgesetzbuch. This is obviously due to the huge amount of intangible assets of universities. Therefore, he claims that universities which have to produce financial statements should also integrate information about intangible assets within the management report of those statements.

The role of IC reporting in Austria

Based on the concepts and experiences with IC reporting in general and with research organisations in particular, a concept for the application of IC reporting to Austrian universities was developed for the Austrian Ministry for Education, Science and Culture by a project team in the course of the preparation of the new Austrian university law (see Leitner et al, 2001). The reorganisation of Austrian universities revealed a high demand for such a new instrument since universities now have greater autonomy and thus have to take decisions about resource allocation with respect to their tangible and intangible assets. Moreover, Austrian policy-makers and the government are interested in more comprehensive information about the development and effective use of the universities' intangible resources. The reorganisation of Austrian universities is based on the principles of New Public Management with its premises of increased autonomy, output orientation and performance-based funding (Titscher et al, 2000). One important element of the new paradigm is the introduction of performance contracts. These contracts define the duties of both the universities (studies offered, human resources, research programs, co-operation and social goals) and the ministry (funding), and assign a global budget; in Austria, the contracts will in future be drawn up for three years. The funding — which is based on the four criteria of need, demand, performance and social goals — is negotiated between the universities and the ministry. It will be partly based on performance indicators, and thus up to 20% of the funds will be allocated depending on


the development of selected performance measures over the years. In addition, every year the universities will have to generate a performance report, which is to provide information about the development and achievement of the contract. The main task of the IC research project was to develop an IC model for universities which meets the specifics of their knowledge-production process in the new organisational and legal context of universities. According to the new university law (UG, 2002), the publication of the IC report has to be parallel to the development of the performance contract and the performance report. While the performance report only deals with the topics addressed within the performance contract, the idea behind IC reports is to give universities the opportunity to report on their full range of activities without restrictions. Both reports are generated by the university itself and it is the vice-chancellor (Rektor) who is finally responsible for their proper implementation. IC reporting for Austrian universities should thus fulfil two aims. First, it should provide information for the management of intangible resources. Universities have to decide: whether and how much they should invest in the training of scientists, with whom co-operation should be fostered, which research programs should be emphasised, etc. Thus, in the course of the preparation of an IC report, universities have to discuss their strategies and aims, have to interpret indicators, etc. and therefore learn about their knowledge-production process. Second, IC reports should provide external stakeholders with information about the development and productive use of intellectual capital. Here the ministry should also benefit from a better overview of the development of the national university system and of its strengths and weaknesses in specific fields and thus get information for adapting the national science and education policy. 
Besides performance contracting and IC reporting, evaluation is the third main instrument for the governance and development of Austrian universities. In Austria, evaluations do not have a long history and were carried out only occasionally in the past. In the course of the reorganisation, evaluations are to be institutionalised for all universities. They have a clear focus on in-depth assessments of the quality of research and education as well as of the research staff and are to be carried out every three to five years by external peers. Every Austrian university has to submit its IC report to the ministry annually by 30 April. The ministry will also define which obligatory indicators have to be published by the Austrian universities, taking the disciplines into account. IC reports have to be published for the whole university, but it is up to the university to produce reports at departmental, institutional or disciplinary levels too. Moreover, universities can also publish an IC report for other stakeholders, and address the information needs of students or industrial partners.


Considering the background and principles of the Austrian reorganisation, the new university law finally defined that the IC report (Wissensbilanz) explicitly has to show:

1. the aims related to society and the university's self-defined strategy,
2. the intangible assets, separated into human capital, structural capital and relational capital,
3. the performance processes and output measurements corresponding to the definitions within the performance contract (see UG, 2002).

This scope of IC reports for Austrian universities is based on the model specified in the next section.

The IC reporting model for Austria

The conceptual framework for IC reporting of Austrian universities is based on a model which can be explained as follows. The model tries to visualise the knowledge-production process within universities and consists of four main elements: the goals, the intellectual capital, the performance processes and the impacts (see Figure 1). Thus, the model conceptualises the transformation of intangible resources when carrying out different activities (research, education, etc.), resulting in the production of different outputs according to specific and general goals. Like other IC models, especially those applied by research organisations, this model can be labelled as having a process-oriented approach. The IC model focuses not only on the different forms of intangible assets or intellectual capital but also on the questions of how these investments are used by the university and how they influence the outputs. A similar process logic is also used by the research organisations ARC and DLR within their IC reports, which served as one reference model for the Austrian IC university model (Leitner and Warden, 2004). In the case of universities and research organisations, the


integration of the output dimension is highly relevant since their outputs are also knowledge-based by nature or even intangible assets themselves, such as patents or new technologies developed. This logic is also similar to that of models of innovation and research processes developed in the innovation and evaluation literature, which frequently separate inputs, processes and outputs (Rothwell, 1994; Dodgson and Hinze, 2000; Roessner, 2000). The logic of the model can be described as follows. The development of IC is guided by political goals set by the ministry — which, in turn, are based on Austrian science and education policy — but also by the organisational goals defined by the universities themselves. IC or intangible resources are interpreted as the input for the knowledge-production process within universities. Thus, IC is understood to be all forms of intangible resources that deliver the knowledge base for a university to carry out its tasks and that allow it to enhance its future competitiveness. IC is thereby not restricted to accounting rules for its recognition as an asset. Three elements of intellectual capital are differentiated: human capital, structural capital and relational capital. In the context of universities, human capital is the knowledge of the researchers and the non-scientific staff of the university. Structural capital comprises the routines and processes within a university, an often neglected part of its infrastructure. Relational capital comprises the relationships and networks of the researchers as well as of the entire organisation. The model thus adopts the widely diffused classification also proposed by the MERITUM group (MERITUM, 2002, see above), which is obviously of relevance for universities. The IC model differentiates six performance processes, which can be reduced or enlarged during the implementation depending on the specific profile and type of university (e.g. universities for arts, technical institutions, business schools).
Apart from scientific research and education — which are the kernel of university activities — training, commercialising of research, knowledge transfer to the public,

[Figure 1. Model for IC reporting of Austrian universities. The figure depicts the intellectual capital (human capital, structural capital, relational capital) flowing into the performance processes (research, education, training, commercialising of research, knowledge transfer to the public, services, infrastructure), whose results appear as impact on the stakeholders (ministry, students, industry, the public, the science community, etc.). Source: Leitner et al, 2001]




services and infrastructure services are separated. The latter can be summarised as the third mission of the modern university (OECD, 1999a). These elements are mainly captured with process and output measures. In the category of impact, the achievements of the performance processes are assessed. The stakeholders of the universities addressed here are the scientific community, students, citizens, industry, etc. Naturally, these are the most difficult elements to evaluate with quantitative data. The model thus distinguishes outputs and impacts but does not include outcomes, as frequently defined within the evaluation literature (Turpin et al, 1999). As in the case of industry, the different elements of the model will be measured and assessed by financial and non-financial indicators, as well as by qualitative information and valuations. Within the project a list of indicators was developed and proposed (see the list of proposed indicators in the Appendix). The definition and selection of indicators by the research team was based on:

1. the set of measures used in the past within Austrian universities,
2. indicators proposed within the intellectual capital literature, and
3. the findings of evaluation research.
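The four-element structure described above, with each element carrying its own financial and non-financial indicators, might be sketched as a nested report; the indicator names and figures below are hypothetical and are not those later defined by the ministry:

```python
# A minimal sketch of the Austrian IC model's structure: goals guide
# intellectual capital, which feeds performance processes, whose
# achievements are assessed as impact. All names/values are illustrative.
ic_report = {
    "goals": ["strengthen applied research", "widen industry co-operation"],
    "intellectual_capital": {
        "human": {"researchers (headcount)": 410, "training days per head": 3.2},
        "structural": {"IT expenses per researcher (EUR)": 2100},
        "relational": {"co-operation contracts": 57},
    },
    "performance_processes": {
        "research": {"publications": 320, "funded projects": 44},
        "education": {"graduates": 650},
        "commercialising": {"licensing income (EUR)": 180000, "spin-offs": 2},
    },
    "impact": {"citations": 1400, "graduate employment rate (%)": 92.0},
}

def count_indicators(report):
    """Count all indicators across the nested report structure."""
    total = 0
    for section in ("intellectual_capital", "performance_processes"):
        for element in report[section].values():
            total += len(element)
    total += len(report["impact"])
    return total

print(count_indicators(ic_report))  # → 11
```

A real report would attach such indicators per discipline as well as per university, and accompany them with the qualitative valuations the model requires.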

Even though most indicators are non-financial by nature, some indicators are financial figures. Obviously, the financial assessment of outcomes is the most difficult. In the case of commercialising, the amount of licensing income and the sales generated by spin-out firms are possible answers to that problem. The indicators, about 50 to 100, that all Austrian universities will have to publish will be defined by the ministry in an additional decree in 2004.¹ There will be a well-defined set of indicators for all universities, some of which will have to be published at the university level, some for each discipline (e.g. chemistry, philosophy, architecture). However, it is up to the university to publish additional indicators that might be of interest for the external stakeholders and reflect their specific competencies, aims, achievements or performance potential. Moreover, universities might — and probably will — use additional indicators or disaggregated data for internal management, especially for internal resource allocation decisions.

The advantage of the obligation to define indicators is the possibility of making comparisons between different disciplines but also between whole universities. This will noticeably increase transparency throughout the Austrian university system. However, there will not be specific indicators that universities have to publish for specific groups of external stakeholders, such as funding agencies, research councils, etc. Thus, not only the ministry but also international and national funding agencies, co-operation partners, industrial firms, students, etc. will benefit from the additional information provided for their decisions. Based on the indicators, rankings of universities, which have so far not been prepared in Austria, will also become possible.

When applying the IC model, universities explicitly have to formulate specific goals. Carrying out this task produces important benefits for Austrian universities. For instance, although scientific research and education are equally important for all universities, universities have the choice of setting certain priorities regarding their performance processes. This can be represented, for instance, as the opposite ends of a continuum: universities can focus purely on basic research or, at the other end, carry out applied research. The latter is becoming increasingly important and can, in turn, deliver even new fundamental research questions. Universities can co-operate with industry or public agencies to solve concrete problems or commercialise their research findings via spin-offs. Universities may choose to educate only the best students and provide an elite education, while others provide education for the masses. Thus, through the provision of information about strategies and aims it is also possible to consider the specific missions and aims of universities when comparing data between them.

Evaluation and performance management

Intellectual capital management and reporting as proposed here is a new instrument for university management and for the governance of universities by ministries and policy. Like other recently proposed instruments for the management of public institutions, it should enable decision-making within the organisation in a context of increased autonomy. The most important instruments for universities differentiated and discussed in recent years are various forms of evaluation and performance management systems. IC reporting deals with similar tasks and will be contrasted with these two instruments by several criteria in the following section. In addition, experience and studies about the use of performance management systems and evaluations might also deliver useful information for designing IC management systems.

Evaluation approaches

Evaluation approaches have the task of assessing the effects, impacts and efficiency of publicly funded institutions and programs. They were first applied in the USA in the 1960s for the assessment of different public programs (regional development, employment, health, etc.). Since then evaluation approaches have spread to various public sectors and have been applied to assessing universities' research and education outputs. Evaluation approaches are linked to the long debates about how to value public spending and how to estimate the social and economic effects


of public programs and public research expenditure. While the USA, the UK and the Netherlands soon started to evaluate the research and education provided by universities, in other European countries the use of these concepts started quite late and the application and extent of evaluation still varies considerably across Europe. In most countries, evaluations are primarily carried out for institutes and departments or specific research programs, seldom for an entire university. In general, evaluation should deliver information for public organisations and administration with the aim of solving or overcoming identified problems and weaknesses. In addition, some authors argue that evaluation should also deliver information for the general public and thus meet the demand for social accountability (Richter, 1991). Evaluation is also seen as an instrument to promote reform and self-organisation within universities. However, in practice, it often fails to establish feedback loops because, frequently, the reports are not discussed intensively across the evaluated institution (Blalock, 1999). Modern evaluations have the task of providing information for resource allocation. In practice, evaluation is partly related to resource allocation decisions. While in the UK evaluation is directly associated with government resource allocation, in other countries such as the Netherlands evaluation reports are to a greater extent intended to support research groups and universities in their management tasks. Evaluations are based on different methods: quantitative tests, qualitative approaches, observations, bibliometrics, self-assessments and benchmarking (Turpin et al, 1999). Despite the use of quantitative methods and data, most evaluation reports are still qualitative by nature. Because of their complex nature and diverse impacts, evaluations of public initiatives and programs are often restricted to qualitative analyses. Evaluations are usually executed by external experts.
Yet internal evaluations have also gained popularity, especially because they are better suited to enhance learning within the organisation.

When evaluating research and education, the goals have to be defined explicitly. However, the goals of institutions and programs are often manifold, vague or even contradictory. A crucial task is thus the definition of valuation criteria, which are related to the goals and interest groups. Considering the specific aims, a major challenge for evaluations is a common definition of the output, which is especially difficult in the case of education and research: for example, 'What is a good education?' or 'What are basic skills and learning capabilities?' This definition is, in turn, dependent on the national and political context of education.

The problems discussed above also lead to the debate as to whether evaluations should generate recommendations. Rossi et al (1988), for instance, argue that evaluation reports should deliver only


data, whereas the assessment should be done by the contracting authority of the evaluation. Another variant is that the evaluator assesses on the basis of the goals that have been officially articulated. Here it is important for the evaluators to work out the goals considering the context of the entire program or institution.

There is a rich literature on research evaluation and the innovation process that deals with the selection and use of indicators for evaluations, which is obviously also of interest in the context of IC reporting. Important issues concern the use of publications or patents to measure research output or university-industry relationships (e.g. Dodgson and Hinze, 2000; Schartinger et al, 2001). For instance, the limits of the indicators 'number of publications' or 'patents' are widely recognised. The mere counting of publications is quite problematic, since quantity can hardly be equated with the quality of publications (Kieser et al, 1998).

Regarding the organisation of and responsibility for evaluations, in many countries national agencies have developed guidelines for their preparation. In the USA and the UK, government legislation is the driving force for the systematic performance of evaluation and the definition of measures and indicators (GPRA in the USA, the Office of Science and Technology in the UK), whereas in the Netherlands the responsibility for evaluation has been decentralised and delegated to intermediary organisations, such as funding foundations or university associations.

The concept of performance management

In the 1980s, especially in Anglo-American countries, public organisations such as hospitals, schools and public agencies started to adopt managerial instruments already used successfully in the private sector, in particular performance management systems (Lindgren, 2001).
Some proponents even explicitly state that they want to make the public sector more like the private sector with respect to accountability (Ittner and Larcker, 1998). The concept of New Public Management captures this broad movement. Here the idea of output and outcome orientation is central: resource allocation is based on goals and on the performance measures achieved.



Performance management is the process of defining goals, performance categories and performance standards, and of reporting and communicating them to the public owners, according to the National Academy of Public Administration (cited in English and Lindquist, 1998). Initial objectives for programs and institutions have to be defined, for which in turn relevant indicators have to be developed. This process allows the achievement of objectives to be assessed.

A crucial activity in designing performance management systems is the development of performance measures, which includes measures for outputs and outcomes, and sometimes also for inputs and processes. This leads to the necessity of defining outputs as well as products. As discussed above, the definition and measurement of the outputs of universities is problematic. If inputs and outputs are quantified by indicators, it is also possible to link them and to develop efficiency measures (Davies, 1999). In addition, with these indicators the allocation of resources can be carried out. In various European countries some universities have also started to use contracts to allocate funds in accordance with performance indicators (Ziegele, 2001). As stated above, this idea was also taken up by the Austrian Ministry for Science and Education during the reorganisation of the universities.

Basically, performance management systems use financial as well as non-financial indicators with the main aim of defining the outcome, and they collect quantitative measures, often founded on IT-based systems or management information systems (MISs). In practice, however, they often produce simple statistics to answer critical planning and managerial questions, instead of conducting more complex data analyses. Thus, they are often criticised for using quick-turn-around data in internal automated MISs that deliver relatively simple comparisons of gross outcomes against performance goals (Blalock, 1999).
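The linking of quantified inputs and outputs into simple efficiency measures, as described above, can be sketched as follows. This is an illustrative sketch only: the department names, indicator names and figures are hypothetical, not taken from any actual reporting model.

```python
# Illustrative sketch: deriving simple efficiency measures by linking
# input indicators (staff FTE) to output indicators (publications,
# graduations). All names and figures are hypothetical.

departments = {
    "Dept A": {"scientific_staff_fte": 40, "publications": 120, "graduations": 60},
    "Dept B": {"scientific_staff_fte": 25, "publications": 90, "graduations": 30},
}

def efficiency(dept: dict) -> dict:
    """Output per unit of input: publications and graduations per staff FTE."""
    fte = dept["scientific_staff_fte"]
    return {
        "publications_per_fte": dept["publications"] / fte,
        "graduations_per_fte": dept["graduations"] / fte,
    }

for name, data in departments.items():
    print(name, efficiency(data))
```

Such ratios are only as meaningful as the underlying indicator definitions; as the text notes, they invite comparisons of gross outcomes without capturing the complexity behind them.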
Though criticised on the grounds that they do not capture the true complexity of programs and institutions, proponents argue that performance management systems can free agencies and staff from rigid regulations (Cunningham, 2000). Furthermore, defining performance measures that are consistent with program or organisational goals supports the strategic development process at all levels of an institution. One important challenge for public organisations here is the shift from the use of output indicators towards outcome indicators (Cunningham, 2000). A widely recognised problem with performance management is 'goal displacement', which occurs when performance management creates incentives that direct effort towards meeting the requirements of measuring and reporting rather than towards the overall aims of the institution (Davies, 1999).

Critical factors for successful implementation are top leadership support, the personal involvement of senior management, the participation of relevant stakeholders and clarity about how the performance information will be used (Greene, 1999). Cunningham (2000) found that the commitment and involvement of all actors, as well as integration into the strategic planning process, is critical. The benefits arise from the process itself rather than from the end result of the performance reporting activity. In particular, the implementation of performance management systems requires communication about performance that has not been presented before. Thus, it is the communication rather than the performance information itself that leads to benefits.

IC reporting, evaluation and performance management

The three instruments presented here have different aims and scopes of application, which clearly overlap to some extent. In the following, the three instruments are discussed according to their different features and aims (see Table 1 for an overview). Regarding the approach and focus of the instruments, evaluation is the scientific-oriented approach with in-depth analysis, carried out at certain points

Table 1. Comparison of evaluation, performance management and IC reporting

Focus
  Performance management: Measuring results via indicators
  External evaluation: Assessment of efficiency and effectiveness
  IC reporting (a): Intangible resources and their management

Approach/methods
  Performance management: Performance indicators, gathered via information systems
  External evaluation: Scientific-oriented and in-depth; tailored to the specific goals; various quantitative and often qualitative reports
  IC reporting (a): Indicator-based; obligatory indicators, but flexible for an adaptation of indicators; narrations; reports

Data collection
  Performance management: Continuous; mainly based on information systems and databases
  External evaluation: Periodically (3-5 years) or at discrete points
  IC reporting (a): Periodically, preferably based on information systems

Major aim
  Performance management: Internal management support; deliver information for evaluations; support of shareholder/public authority partly
  External evaluation: Information for policy decisions; internal management support restricted
  IC reporting (a): Support shareholder/public authority; internal management support; deliver information for evaluations

Resource allocation
  Performance management: Yes, based on indicators
  External evaluation: Yes, mainly in the USA
  IC reporting (a): Possible

Recommendation
  Performance management: No
  External evaluation: Partly
  IC reporting (a): No

Responsibility
  Performance management: Organisation (unit)
  External evaluation: National agencies
  IC reporting (a): Organisation (unit)

Note: (a) as conceptualised for Austrian universities


of time, while performance management systems are characterised by a pragmatic approach, primarily based on output indicators derived from defined goals. In practice, both are often implemented simultaneously. According to the new Austrian university law, evaluations will be carried out every three to four years and will be in-depth evaluations of research and education by external peers.

IC reporting for universities, as presented in this paper, is a tool that encompasses the entire knowledge-production process within universities, with the aim of generating information for management decisions. It is mainly based on indicators, which are categorised in a model. In addition, it also incorporates qualitative information, which should express, in particular, the complex nature of, and interdependencies between, driving factors and results. In contrast to performance management and evaluation, IC reports explicitly focus on intellectual capital and hence enlarge the existing input and output categories of performance management systems. In this context, structural capital in particular has to be considered a blind spot within universities. The IC model presented in this paper highlights the importance of intangible resources for universities. Furthermore, whereas IC reporting relies on quantitative as well as qualitative valuations, performance management systems often focus on only a few indicators.

All three instruments apply different kinds of qualitative and quantitative methods to value research and educational processes at universities. Although evaluations more often rely on qualitative methods, in recent years different kinds of indicators have been increasingly integrated. Indicators have already become an integral part or method of certain evaluations, especially in the USA, where data about financial resources, student satisfaction or academic reputation, partly gathered by questionnaires, are aggregated via a standardised procedure.
Moreover, the classification of inputs, processes and results is commonly used in all three instruments. Stufflebeam (1983), for instance, proposes that evaluations have to analyse:

1. the context, e.g. What are the aims addressed by the program?
2. inputs, e.g. human resources and tangible resources;
3. processes, the activities undertaken by the program or institution; and
4. products, the results of the program.

With respect to the measurement of results, the separation into output, outcome and impact is frequently used in evaluations (Turpin et al, 1999). However, the latter two are sometimes used synonymously. In general, output refers to the routine products of research activities, such as publications, conference papers, training courses, degrees, etc. Outcome means the achievements of the activity, such as new theories, new devices or analytical techniques.


Impact is a measure of the influence or benefit of the research outcome or output, either within the research community itself or in the wider society. It thus includes the social, economic or environmental benefit. The assessment of impacts is explicitly treated in the IC model.

The aspect of learning and management support is addressed differently by the three instruments. Performance management systems document outcomes, but provide less information for understanding the complex nature of knowledge-production. They collect data on a continuous basis, which tends to rely on easy-to-obtain quantitative measures conditioned by cost acceptance. In the field of evaluations, internal evaluations focus strongly on increasing performance and efficiency through internal discussions. IC reports should also enhance the academic discussion within the faculty board (dean) and the university's governing boards and should therefore support the strategic development process.

The linking of performance measurement to budgeting is a global phenomenon and is frequently treated within performance management systems. In Austria the allocation of financial resources will be based on the performance contract (see above). However, IC reports have the potential to support the development of these contracts. Especially within universities, the management could gather some data at the departmental level and thus allocate budgets internally according to the specific criteria defined by the university itself.

Finally, as concerns the responsibility for producing the instruments, traditional evaluations are carried out by an external expert team which, based on different methods, generates data, interprets it and writes a report. Internal evaluations are produced by the members of the organisation. Performance management is implemented within the organisation and administered by the top management or administration.
IC reports are produced by the members of the organisation, as is the case in industry. In the future, external authorities will probably control the process and ensure the quality of its production, for instance through auditors or ministry agents.
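The input/process/output/outcome/impact distinction discussed above can be made concrete in a small classification sketch. The assignment of individual indicators to categories here is an illustrative assumption, not the official Austrian taxonomy.

```python
# Illustrative sketch of the input/process/output/outcome/impact
# classification discussed in the text. The assignment of individual
# indicators to categories is hypothetical.
from enum import Enum

class Category(Enum):
    INPUT = "input"        # e.g. human and tangible resources
    PROCESS = "process"    # activities of the program or institution
    OUTPUT = "output"      # routine products: publications, degrees, ...
    OUTCOME = "outcome"    # achievements: new theories, techniques, ...
    IMPACT = "impact"      # wider social/economic/environmental benefit

INDICATORS = {
    "Expenses for training": Category.INPUT,
    "Number of conferences visited": Category.PROCESS,
    "Publications (refereed)": Category.OUTPUT,
    "New analytical techniques developed": Category.OUTCOME,
    "Employees created by spin-offs": Category.IMPACT,
}

def by_category(cat: Category) -> list[str]:
    """Return all indicator names assigned to a given category."""
    return [name for name, c in INDICATORS.items() if c is cat]

print(by_category(Category.OUTPUT))
```

A shared classification of this kind is what would allow the three instruments to exchange indicator data, as the comparison above suggests.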

Discussion and outlook

The instrument presented here for measuring and valuing intellectual capital and its productive use in universities is a comprehensive approach, which addresses communication and management issues simultaneously. Considering the long history of assessing the results of research and education at universities, the idea of measuring and reporting knowledge-based resources and results with a specific instrument is not new. However, IC reporting as conceptualised here focuses on the identification of various forms of intellectual capital and tries to link them to the outputs of the universities. IC reports separate different forms of intellectual capital, define specific indicators, and link these to the knowledge-production process of a university. The underlying thesis is that proper management of IC at universities has a significant impact on performance and on the efficient use of the invested financial funds. Although performance measurement systems, management accounting instruments and evaluations also use indicators to some extent for input resources, these instruments do not explicitly deal with IC, nor do they cover the whole performance and output range of a university. In addition, the IC model as presented here visualises the relations between inputs, processes and outputs of a university by means of a functional model. Even though the proposed model abstracts knowledge-production at universities into a simple linear model, it should enable structured discussions within management and fits the requirements for external reporting for Austrian universities.

According to the IC movement, intangible resources are considered the most valuable resources of a university. Hence, scientific productivity is determined not only by intrinsically motivated scientists but also by the general organisational incentives and context. The university is more than the number of its scientists; here structural capital matters too. A recent study, for instance, demonstrates that institutional and organisational factors of research institutes influenced by the management (disciplines, administrative processes, leadership, etc.) are of high relevance for explaining research outputs (Hollingsworth and Hollingsworth, 2000). Another empirical study, by Teodorescu (2000), illustrates that organisational and input factors or variables determine the performance of universities. Teodorescu, for instance, shows that the number of conferences and memberships of societies are positively correlated with publication output.
IC reports can foster the establishment of an organisational structure and culture within universities, which is increasingly important for many universities in the new competitive environment.

The Austrian IC reporting model provides information for internal and external stakeholders of universities. Since the indicators that universities will have to publish in the future will be defined by the ministry, comparisons and benchmarking across Austrian universities will be possible and information for in-depth evaluations will be available. Not only the ministry but also other funding agencies, industrial firms, students, etc. might benefit from the published data. These stakeholders get information not only about past performance but also about the IC of a university, which is a good indicator of its future performance potential. This is also of interest for funding agencies that award their research grants on the basis of peer-reviewed research proposals. The disclosure of intellectual capital goes beyond the information addressed in research proposals, since the latter usually cover only research questions, methodology, the individual researchers or team, and past research tracks. With additional data about the organisational capital and capabilities of a university, it is possible to assess whether organisational factors, such as the infrastructure, management routines, administrative processes, relationships, synergies between institutes, etc., might support the successful realisation of a research project or programme. Clearly, in the future, reviewers and funding agencies will have to learn how to interpret this new kind of information.

Finally, based on the published data, the ministry gets important information for the formulation of science and education policy. The ministry can rank or benchmark universities and obtain information about the strengths and weaknesses of the science system. As the indicators will be selected, where possible, according to commonly accepted definitions or conventions in the science community, comparisons at the international level might become possible in the future. Thus, the establishment of this instrument fosters learning at the organisational level as well as stimulating transparent competition, and can clearly be understood as a new governance mode.

Generally, the valuation of IC measures is dependent on the specific goals and the regional, national and cultural context of the university or organisational unit. IC reports can be developed for an entire university or for institutes, departments and research programs. An interesting question, for instance, is the possibility of valuing and comparing the performance of a research-oriented institute or university with that of a more training-oriented institute or university. This also holds true for education, since the meaning and value of education are contingent on the cultural context. In the German Humboldtian tradition, for example, education is regarded as 'scientific'; in the French tradition, education is regarded as 'professional', which also explains the French phenomenon of the 'grandes écoles'; finally, the Anglo-American tradition stresses the 'liberal' nature of education (Scott, 2002). These few examples already point out the necessity for IC reports to deliver contextual information and information about organisational aims and missions, but also show the limits of comparisons.

Like all measurement and management systems that deal with knowledge-based processes, IC reports face the methodological problem of measuring 'soft', non-physical processes and outputs. There is a long history and a vast literature within evaluation research and the economics of knowledge and innovation that deals with the measurement of scientific and educational processes (see, e.g. Roessner, 2000; Dodgson and Hinze, 2000; Machlup, 1980). Those implementing and running an IC system could learn from the experience accumulated so far with similar systems in various sectors and fields. Owing to the limitations of a purely indicator-based approach, IC reports of universities, similar to those in industry, should thus also integrate qualitative methods (best practice, narration, etc.). Here, for instance, the connection between research and education can be illustrated, the synergies and flows between the different elements of the IC model can be shown, and the cultural context explained, as mentioned above.

Evaluation, performance management and IC reporting overlap to some extent, but offer different solutions for similar problems. IC reports are partly substitutes for evaluations, because the organisation and its performance can be analysed by internal and external stakeholders without the help of evaluators. The question of the balance of internal self-assessment versus external evaluation is a crucial one for further development, and IC reporting has to be linked to this question. There is obviously a demand for further theoretical investigation of the synergies between the different approaches and for linking the three disciplines: evaluation of research, performance measurement and IC theory.

What are possible development trajectories for the future? The functional model presented for application in Austrian universities is a first step. It is flexible enough for individual adaptations and adjustments and has the potential for further improvement and elaboration. Furthermore, through the definition of categories and defined indicators, comparability should become possible at the international level. Although the various national university systems still vary considerably, this instrument, which is flexible enough for modification and improvement, addresses some general issues relevant for all universities, independent of their national specifics. Finally, statistical analyses such as data envelopment analysis or bibliometric methods can be used to analyse and interpret the data provided by IC reports.

On the whole, there is so far no common model or standard available at the international level for valuing intangible assets and preparing IC reports for universities. In many countries national agencies have developed guidelines for the preparation of evaluations and have partly also defined some indicators, which have to be linked to and co-ordinated with further developments within the IC movement at universities. In addition, work carried out at the national and international level on IC management for industrial firms provides valuable information for the development of specific guidelines for application within universities. Considering the potential of IC management and reporting, the European Association of Research Managers and Administrators (EARMA), in collaboration with the European Centre for the Strategic Management of Universities (ESMU), launched the initiative IC in HEROs (intellectual capital in higher education institutions and research and technology organisations) in 2002, with the objective of raising awareness and disseminating good practice in managing and reporting IC among universities and research organisations. Such an initiative will also enhance the possibility of comparing performance data at the international level.
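As a toy illustration of the data envelopment analysis (DEA) idea mentioned above: in the degenerate single-input, single-output case, CCR efficiency reduces to each unit's output/input ratio divided by the best ratio observed. The university names and figures below are hypothetical, and a real application would use linear programming over multiple inputs and outputs.

```python
# Toy illustration of data envelopment analysis (DEA) on IC-report data.
# With a single input and a single output, CCR efficiency reduces to each
# unit's output/input ratio relative to the best ratio observed.
# University names and figures are hypothetical.

# input: scientific staff (FTE); output: refereed publications
units = {
    "Univ 1": (200, 500),
    "Univ 2": (150, 450),
    "Univ 3": (300, 600),
}

def dea_single(units: dict) -> dict:
    """Efficiency score in [0, 1] for each unit; 1.0 marks the frontier."""
    ratios = {u: y / x for u, (x, y) in units.items()}
    best = max(ratios.values())
    return {u: r / best for u, r in ratios.items()}

scores = dea_single(units)
# Univ 2 has the best publications-per-FTE ratio and is rated fully efficient.
```

The general multi-input, multi-output case requires solving one linear program per unit; this sketch only conveys the relative-frontier idea.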

Appendix. Exemplary indicators for Austrian universities

Human capital
Number of scientific staff, total
Number of scientific staff, total (employed)
Number of full-time professors
Number of student assistants
Fluctuation of scientific staff (as percentage of all scientific staff)
Fluctuation of scientific staff (not employed) (as percentage of total scientific staff [not employed])
Percentage growth of scientific staff
Percentage growth of scientific staff (not employed)
Average duration of scientific staff
Expenses for training

Structural capital
Investments in library and electronic media

Relational capital
Research grants abroad (as percentage of scientific staff)
International scientists at the university (total in months)
Number of conferences visited
Number of employees financed by non-institutional funds
Number of activities in committees, etc.
Hit rate, European research programs
New co-operation partners

Research
Publications (refereed)
Publications (proceedings, etc.)
Publications, total
Number of publications with co-authors from industry
PhDs
Non-institutional funds (contract research, etc.)

Education
Graduations
Average duration of studies
Teachers per student
Drop-out ratio
PhDs and master theses finalised

Commercialising
Number of spin-offs
Employees created by spin-offs
Income generated from licences

Knowledge transfer to the public
Hits on Internet site
Lectures (non-scientific)

Services
Measurement and lab services and expert opinions
Leasing of rooms and equipment

Note: For the full list of proposed indicators, see Leitner et al (2001).
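Several of the listed indicators are ratios derived from raw counts, such as staff fluctuation and the drop-out ratio. A minimal sketch of such derivations follows; the figures and the exact definitions used here are illustrative assumptions, not the ministry's official formulas.

```python
# Minimal sketch of deriving ratio indicators from raw counts, in the
# spirit of the appendix list (staff fluctuation, drop-out ratio).
# All figures are hypothetical; the definitions are illustrative, not
# the official Austrian formulas.

def fluctuation_pct(departures: int, total_staff: int) -> float:
    """Staff fluctuation as a percentage of all scientific staff."""
    return 100.0 * departures / total_staff

def drop_out_ratio(enrolled: int, graduated: int) -> float:
    """Share of an entry cohort that left without graduating."""
    return (enrolled - graduated) / enrolled

print(fluctuation_pct(12, 240))   # 5.0
print(drop_out_ratio(500, 350))   # 0.3
```

Publishing the raw counts alongside such ratios would let external stakeholders recompute and compare the indicators across universities.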


References

D Andriessen (2004), Making Sense of Intellectual Capital (Butterworth-Heinemann).
Austrian Research Centers Seibersdorf (2000), Intellectual Capital Report 1999 (Seibersdorf).
H Biedermann, M Graggober and M Sammer (2002), 'Die Wissensbilanz als Instrument zur Steuerung von Schwerpunktbereichen am Beispiel eines Universitätsinstitutes', in M Bornemann and M Sammer (editors), Wissensmanagement: Konzepte und Erfahrungsberichte aus der betrieblichen Praxis (DUV Verlag, Wiesbaden).
A B Blalock (1999), 'Evaluation research and the performance management movement', Evaluation, 5(2), pages 117-149.
E Bueno (2002), 'IC and scientific production of the Madrid research centers', paper presented at the conference 'The Transparent Enterprise: The Value of Intangibles', 25-26 November, Madrid.
P N Bukh, H Larsen and J Mouritsen (1999), 'Developing intellectual capital statements: lessons learned from 23 Danish firms', paper presented at the OECD International Symposium 'Measuring and Reporting Intellectual Capital: Experiences, Issues, and Prospects', 9-10 June, Amsterdam.
L Canibano, M Garcia-Ayuso and M P Sanchez (1999), 'Accounting for intangibles: a literature review', Journal of Accounting Literature, 19, pages 102-130.
G Cunningham (2000), 'Towards a theory of performance reporting in achieving public sector accountability: a field study', paper presented at the Annual Meeting of the British Accounting Association.
Danish Ministry of Science, Technology and Innovation (2003), Intellectual Capital Statements: The New Guideline (Copenhagen).
I C Davies (1999), 'Evaluation and performance management in government', Evaluation, 5(2), pages 150-159.
Deutsches Zentrum für Luft- und Raumfahrt (DLR) (2001), Intellectual Capital Report 2000 (Köln).
M Dodgson and S Hinze (2000), 'Indicators used to measure the innovation process: defects and possible remedies', Research Evaluation, 8(2), pages 101-114.
DUK (2003), Wissensbilanz 2002 (Danube University Krems, Krems).
L Edvinsson (1997), Intellectual Capital (Skandia, Stockholm).
J English and E Lindquist (1998), Performance Management: Linking Results to Public Debate (Ottawa).
J C Greene (1999), 'The inequality of performance measurements', Evaluation, 5(2), pages 160-172.
J R Hollingsworth and E J Hollingsworth (2000), 'Radikale Innovationen und Forschungsorganisation: Eine Annäherung', ÖZG, 11(1), pages 31-66.
C D Ittner and D F Larcker (1998), 'Innovations in performance measurement: trends and research implications', Journal of Management Accounting Research, 10, pages 205-238.
U Johanson and M Skoog (2001), The Relevance of Measuring and Reporting on Intangibles in Small- and Medium-Sized Enterprises: Does it Facilitate Entrepreneurship? (School of Business, Stockholm University).
A Kieser (1998), 'Going Dutch: Was lehren niederländische Erfahrungen mit der Evaluation universitärer Forschung?', Die Betriebswirtschaft, 58(2), pages 208-224.
G Koch, K-H Leitner and M Bornemann (2000), 'Measuring and reporting intangible assets and results in a European contract research organisation', paper prepared for the Joint German-OECD Conference 'Benchmarking Industry-Science Relationships', 16-17 October, Berlin.
K-H Leitner, M Sammer, M Graggober, D Schartinger and C Zielowski (2001), Wissensbilanzierung für Universitäten. Auftragsprojekt für das Bundesministerium für Bildung, Wissenschaft und Kunst (Wien).
K-H Leitner and C Warden (2004), 'Managing and reporting knowledge-based resources and processes in research organisations: specifics, lessons learned and perspectives', Management Accounting Research, 15(1), pages 33-51.
B Lev (2001), Intangibles: Management, Measurement, and Reporting (Brookings Institution Press, Washington DC).
L Lindgren (2001), 'The non-profit sector meets the performance-management movement', Evaluation, 7(3), pages 285-303.
F Machlup (1980), Knowledge: Its Creation, Distribution, and Economic Significance. Vol 1: Knowledge and Knowledge Production (Princeton University Press, Princeton).
K-H Maul (2000), 'Wissensbilanzen als Teil des handelsrechtlichen Jahresabschlusses', Deutsches Steuerrecht, 38(4), pages 2009-2016.
MERITUM Project (2002), Guidelines for Managing and Reporting on Intangibles (Intellectual Capital Report, Madrid).
M Miller, B DuPont, V Fera et al (1999), 'Measuring and reporting intellectual capital from a diverse Canadian industry perspective', paper presented at the OECD International Symposium 'Measuring and Reporting Intellectual Capital: Experiences, Issues, and Prospects', 9-10 June, Amsterdam.
J Mouritsen, H T Larsen and P N D Bukh (1998), Intellectual Capital and the 'Capable Firm': Narrating, Visualising and Numbering for Managing Knowledge (Copenhagen Business School and Aarhus School of Business).
OECD (1999a), University Research in Transition (OECD, Paris).
OECD (1999b), The Knowledge-based Economy: Facts and Figures (OECD, Paris).
R Richter (1991), 'Qualitätsevaluation von Lehre und Forschung an den Universitäten der Niederlande. Eine Bilanz der letzten 10 Jahre', in W Weber and H Otto (editors), Der Ort der Lehre in der Hochschule (Weinheim).
Risø National Laboratory, Optics and Fluid Dynamics Department (1999), Intellectual Capital Accounts (Roskilde).
A Rodriguez, J Landeta and S Ranguelov (2002), 'R&D&T capital at the universities: what types of knowledge drive it?', paper presented at the conference 'The Transparent Enterprise: The Value of Intangibles', 25-26 November, Madrid.
D Roessner (2000), 'Quantitative and qualitative methods and measures in the evaluation of research', Research Evaluation, 8(2), pages 125-132.
P Rossi, H E Freeman and G Hofmann (1988), Programm-Evaluation. Einführung in die Methoden angewandter Sozialforschung (Stuttgart).
R Rothwell (1994), 'Industrial innovation: success, strategy, trends', in M Dodgson and R Rothwell (editors), The Handbook of Industrial Innovation (Edward Elgar, Cheltenham).
Salamanca (2001), The European Higher Education Area: joint declaration of the European Ministers of Education, convened in Bologna on 19 June 1999.
P Scott (2002), 'The future of general education in mass higher education systems', Higher Education Policy, 15, pages 61-75.
D Schartinger, A Schibany and H Gassler (2001), 'Evidence of interactive relations between the academic sector and industry', Journal of Technology Transfer, 26(3), pages 255-268.
A Schenker-Wicki (1996), Evaluation von Hochschulleistungen. Leistungsindikatoren und Performance Measurement (Wiesbaden).
T Stewart (1997), Intellectual Capital: The Wealth of Organizations (London).
D L Stufflebeam (1983), 'The CIPP model for program evaluation', in G F Madaus, M S Scriven and D L Stufflebeam (editors), Evaluation Models: Viewpoints on Education and Human Service Evaluation (Boston).
P Sullivan (2001), Value-driven Intellectual Capital (John Wiley & Sons, London).
K E Sveiby (1997), The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets (Berrett-Koehler, San Francisco).
D Teece (2000), Managing Intellectual Capital (Oxford University Press, Oxford).
D Teodorescu (2000), 'Correlates of faculty publication productivity: a cross-national analysis', Higher Education, 39(2), pages 201-222.
S Titscher et al (editors) (2000), Universitäten im Wettbewerb (München).
T Turpin, S Garrett-Jones, D Aylward et al (1999), Valuing University Research: International Experiences in Monitoring and Evaluating Research Outputs and Outcomes (Australian Research Council and Centre for Research Policy, University of Wollongong).
UG (2002), Bundesgesetz über die Organisation der Universitäten und ihre Studien (Universitätsgesetz 2002) (Wien).
F Ziegele (2001), Akademisches Controlling: Theoretische Grundlagen, Ziele, Inhalte und Ergebnisse. Kooperationsprojekt der Technischen Universität München und des CHE Centrum für Hochschulentwicklung (München, Gütersloh).
