Implementing and Using Quality Assurance

The idea of a European Quality Assurance Forum was proposed by EUA to the “E4 Group” (ENQA, ESU, EUA, and EURASHE) in 2003. This group has been meeting regularly since September 2001 to discuss ways to develop a European dimension for quality assurance. It was responsible for developing the “Standards and Guidelines for Quality Assurance in the European Higher Education Area” and for designing the structure and processes of the European Register for Quality Agencies. The first European Quality Assurance Forum took place in 2006 at the Technical University of Munich and focused upon internal quality processes.

The second Forum, hosted by the Sapienza Università di Roma, was focused upon “Implementing and Using Quality Assurance: Strategy and Practice” and attracted over 500 participants: academics, QA agencies and students. Thus, by the time registration for the second Forum closed, it became clear that this event had become the premier conference for quality discussions in Europe. This publication gathers a representative sample of the contributions to the Forum. Some of the keynote presentations are included as well as a few of the many excellent papers that contributed to lively discussions in the parallel sessions.

The keynotes discuss quality from a conceptual, historical and policy perspective. The papers are mostly focused on institutional case studies and show the variety of ways that higher education institutions and QA agencies ensure quality. The Forum Organising Committee hopes that this collection of papers will inspire higher education institutions, academic staff, students and QA agencies to reflect upon ways that quality can be ensured while respecting the need for diversity and innovative practices in research and education.

On behalf of the Forum Organising Committee, I wish to thank the following for their support of this activity: the Sapienza Università di Roma, which hosted the Forum with a great sense of organisation and hospitality, the 70 authors who submitted papers to the Forum, the Socrates Programme, which funded it partially, and Harald Scheuthle, EUA, who spearheaded the organisation on behalf of the E4. The European Quality Assurance Forum will be offered again on 20–22 November 2008 at Corvinus University in Budapest and will focus on an examination of current trends in quality assurance.

We hope to repeat the success of the first two Forums and look forward to welcoming you then.

Henrik Toft Jensen
Chair, Forum Organising Committee

1. INTRODUCTION

Looking back – looking forward: Quality assurance and the Bologna process

Sybille Reichert¹
¹ Reichert Consulting, Zurich, Switzerland.

Quality assurance is so omnipresent and its vocabulary so pervasive nowadays in higher education policy and discourse that one forgets how relatively recent the enthronement of the term “quality” actually is.

Hence, before embarking on an attempt to trace the key paths and challenges which quality assurance will be facing in the years to come, it may be helpful to put the concern with assuring quality in higher education into context. This should not just be a historical exercise, of course, but should also serve to emphasise that quality development in higher education is a great deal more than the formal quality assurance processes that policymakers like to focus upon when they speak about quality in higher education.

Clearly, quality enhancement is the sum of many methods of institutional development, ranging from competitive hiring procedures and the creation of appropriate funding opportunities to facilitating communication between disciplines and supporting innovative initiatives through institutional incentives. The Bologna reforms may serve as a good case in point: while quality assurance is an important part of the Bologna reforms, the latter’s relevance to quality goes far beyond the confines of quality assurance alone.

Seen from their bright side, the Bologna reforms could improve quality in multiple ways: through the opportunities they offer to reflect on and review curricula, to reform teaching methods (student-centred learning, continuous assessment, flexible learning paths) and even through strengthening horizontal communication and institutional transparency. Putting quality assurance into context thus means looking at quality concerns before Bologna, through Bologna, and beyond Bologna. Only then will we understand the value of quality assurance in Bologna and the conditions of its successful realisation at universities.

Before Bologna, higher education debates in the 90s were characterised by multiple national debates on quality problems in higher education, largely due to the effects of under-funded massification. Complaints about overcrowded classrooms and student-staff ratios that did not allow for individualised attention, coupled with outdated teaching methodologies and teacher-centred curricula, long study durations and high drop-out rates, were among the most prominent criticisms of a higher education sector that was not equipped to respond to the demands of its time.

At the same time, more and more systems saw the need for increased autonomy of higher education institutions, to enable them to better face the widening range of demands and the accelerating pace of international research competition. The introduction of institutional autonomy and the simultaneous cutting back of state control could only be realised, however, in conjunction with heightened accountability provisions. Hence, in many countries quality assurance agencies were either created or transformed to meet these new demands.

The 90s were also a decade of increasingly celebrated cooperation. The European Pilot Project on Comparing Quality Assurance Methodologies among five systems (1994, resulting in the Council Recommendations of 1995) was only one expression of the European optimism, which reflected the hope that increased cooperation and mutual understanding would ultimately result in quality enhancement of all parties. We should note that the key methodological features which were elaborated then are still part of the methodological creed of today’s European QA Guidelines.

Finally, one should recall that the quality concerns of the 90s became all the more highly politicised as they became associated with the (lack of) competitiveness of European higher education, the latter being recognised as a key foundation of thriving knowledge economies. The concern with knowledge-intensive economies and societies moved higher education institutions, their problems and challenges, to the foreground. Quality enhancement became a charged theme and quality assurance its key guarantor.

The Bologna Reform Process, which became the focal point of reform in most European countries from 1999 onwards, brought a wide range of quality concerns into the central arena of higher education discourse. Beyond the issues of quality assurance in the narrower sense of institutional processes, quality enhancement can be said to be at the heart of all Bologna reform aims. Indeed, at their origins, the Bologna reforms were conceived essentially as a process of quality enhancement, at least by the initiators of the reforms at European and national levels.

The Bologna reforms were based on the assumption that the international readability of curricular structures and the underlying quality assurance systems would increase cooperation and competition, mobility and institutional good practice, with quality enhancement occurring as a natural consequence of wider and deeper comparisons. A second assumption seemed to be that increased mutual trust in each other’s quality assurance systems would result in increased trust in the quality of higher education provision in those systems, thereby resulting in cross-border movement.

Most importantly, in addition to new curricular structures, Bologna was supposed to bring quality enhancement in teaching: many higher education representatives believed Bologna would accelerate or even trigger the move to outcome-based and/or student-centred teaching in the countries in which traditional, less interactive approaches to teaching were still dominant. Quality assurance processes were supposed to support an increased institutional attention to the hitherto often neglected quality of teaching. Many students also associated the hope for more flexible learning paths with the Bologna reforms.

Some academics welcomed Bologna curricular reforms as an opportunity for widening interdisciplinary courses. In particular, the possibility of disciplinary reorientation between the Bachelor and the Master level was seen as of benefit to the new degree structures. Some students and academics also hoped for more space for independent learning and were later disappointed to observe the opposite effect: the compression of longer degree programmes into shorter ones often led to content and work overload, thus leaving less time for independent projects and learning.

To support these developments, quality improvements were also supposed to be brought about with respect to the transparency of student information and programme descriptions. Most prominently for some national systems, the Bologna reforms were supposed to enhance quality in the response of higher education to labour market needs. Graduates were supposed to become more “employable”, even though agreement on what such sustainable employability would mean in terms of student competences and desirable learning outcomes remains a heated and largely unresolved topic of discussion. Last but not least, the Bologna reforms have addressed the quality of graduate education since 2003, in particular with respect to the quality of supervision and supporting structures which help doctoral candidates prepare for diverse and often interdisciplinary academic or professional practices. Of course, all of these quality aims and visions are often tripped up by the reality of higher education funding.

As universities already pointed out in 2005 (Trends IV) and emphasised again in 2007, the most limiting factor for quality enhancement is not the nature of internal or external QA but the limits to resources when room for improvement is identified. If we now zoom in on the quality assurance side of Bologna’s manifold quality concerns, we observe that Bologna has focused strongly on the processes of quality assurance agencies, their exchange and mutual understanding, moving towards common standards and guidelines which allow comparability across Europe.

Other aspects of the various quality assurance ingredients of higher education, such as the quality expectations and peer-review norms of funding agencies or journals, were not part of the process, since they relate more to research, which only appears on the margins of the Bologna process. Beyond the many changes in quality assurance which were introduced in national systems in the context of Bologna, the development of the European Quality Assurance Standards and Guidelines (ESG) by ENQA, EUA, ESIB and EURASHE, which were adopted by the Ministers of Education in 2005, is beyond doubt the paramount achievement at the European level. Their implementation is now assured through the European Register of Quality Assurance Agencies (EQAR), endorsed by the Education Ministers (London, May 2007), which requires an external evaluation of an agency every five years and includes a judgement of substantial compliance with the European Standards and Guidelines.

Without going into the details of the principles and procedures for external and internal QA which the ESG set down, and which are widely discussed in the context of this Forum, I would like to highlight three achievements which I believe these standards have contributed to QA in Europe.

First, the ESG emphasise strongly that the primary responsibility for QA lies with higher education institutions themselves, rather than with any outside body. This was already officially acknowledged by the Education Ministers in Berlin and Bergen, but the ESG add the noteworthy remark that external control should be lighter if internal processes prove robust enough, which is precisely what universities had been hoping for (see Trends IV report, 2005): “If higher education institutions are to be able to demonstrate the effectiveness of their own internal quality assurance processes, and if those processes properly assure quality and standards, then external processes might be less intensive than otherwise.”

The second achievement consists in the emphasis that internal quality assurance should not be reduced to formalised processes but should be likened more to a set of institutional and individual attitudes, a “quality culture”, aiming at the “continuous enhancement of quality”.

Thirdly, the ESG, like the Bologna reforms in general, reflect a certain shift towards student and stakeholder interests, away from the pure supply perspective which had dominated universities for decades. This attention is reflected, e.g., in the concern with student support and information, with graduate success and, of course, with the demand for including students as active participants in QA processes, even as members of agencies’ external review teams.

So what are or will be the consequences of these European standards for university development? Clearly, in some countries, there will be more regular reviews at institutional level than before, feedback will have to be organised more systematically, and a more systematic use of data will have to be developed. Furthermore, the inclusion of students in QA will be new in some systems.

Some challenges will have to be addressed: How can the teaching focus of QA, to which the ESG restrict themselves, be integrated with concerns of continuous quality enhancement in research? How can the pool of peers be enlarged to include international peers, to allow for truly independent reviews, without incurring daunting costs and missing out on necessary knowledge of national conditions? And last but not least, with the increasing frequency of quality reviews, how can one prevent routine from settling in and undermining the motivation to invest quality assurance with a genuine desire to identify one’s weaknesses and to improve?

To pursue these questions further, I would like to share some of my impressions from recent university evaluations in which I have participated and which gave me the impression that, in those institutions at least, internal quality assurance was alive and kicking and far from being a merely bureaucratic exercise. While these evaluations were all institutionally initiated and formative in nature, they differed widely with respect to their aims and the level on which the review focussed (faculty and department, institutional or national level). Accordingly, they also differed in the benefits and challenges which they brought to the institution, and which are worth pointing to, as they may show the complementary and diverse ways in which internal quality assurance can become a meaningful exercise.

At faculty and department level, the benefits of the evaluation relate, first of all, to the opportunity to connect curricular, institutional and research structures and activities around a common ground of the larger subject area, which usually encompasses a wide number of fields, programmes and even disciplines, but still within an orbit of rather compatible disciplinary cultures.

In addition to allowing the combination of teaching, research and institutional development concerns, this subject-area perspective offers the advantage that academics are more easily engaged, since they expect some feedback on contents and not just on the institutional conditions of their core activities and scientific development. Furthermore, reflections on institutional development are often more substantial if they are related to scientific development. Benefits also consist in the attention paid to real strategic decisions such as hiring policy, restructuring and new interdisciplinary initiatives.

There is, however, an important precondition for effective feedback, namely the link to institutional strategy and institutional autonomy (e.g. with respect to priority-setting in recruitment or infrastructural investment). Without an effective link back to institutional policies, the outcomes of a review may well remain without appropriate consequences.

Quality evaluations at institutional level can be an excellent way to sharpen strategic reflection, addressing such questions as, for instance:

• How to help the development of beneficial institutional perspectives in de-centralised institutions?
• How best to combine disciplinary with interdisciplinary developments and institutional structures?
• How to develop fair processes of rewarding performance in a non-mechanistic manner (leaving enough space for new initiatives) and still grant enough autonomy to de-central units?
• How to combine bottom-up development drive with institutional quality standards?
• How to identify and support institutional priority areas (hiring, infrastructural investment)?

Of course, in order to be useful, such institutional reviews presuppose a sufficient degree of institutional autonomy; otherwise the recommendations and action plans which they are likely to bring forward cannot be realised. If institutional autonomy and some resources for addressing the identified needs for improvement are given, however, they can contribute quite effectively to priority-setting and the professionalisation of university leadership and management. Of course, relative autonomy or negotiation power with the decision-maker is a precondition for the effectiveness of any internal quality assurance process, at any level of institutional development.

But other factors also play an important role in the success of the evaluation. First and foremost, one should mention the time and willingness of academics, deans and institutional leadership to take the evaluation process and recommendations seriously. This attitude is based on the expectation that the reviewers will offer friendly, well-informed advice rather than being perpetrators of a control exercise with an agenda that does not take the aims of the reviewed unit as the decisive reference point.

One should add that every quality review which does not lead to some constructive development decision will undermine the readiness of academics and institutional leadership to engage in future evaluation processes openly and constructively. A second success factor consists in the frequency of the quality assurance cycle. If the reviews occur too frequently, this may result in evaluation fatigue and routine which would negate the motivation and the willingness to engage in genuine dialogue.

Thirdly, a careful choice of peers is vital. They have to be sufficiently distant, i.e. not too closely linked to the reviewed unit or in a conflict of interest with it. Given the small size of academic communities in most countries, this usually means that international peers have to be included. A careful choice will presumably also include the attempt to make the peer group cover different disciplinary areas to allow for enlarged horizons.

Fourthly, well organised feedback should ensure that there are well-reflected and well-argued consequences, so that institutional trust is built around the planned actions. In some form it would also be useful to create opportunities for feeding institutional reviews back into national system reflections, so as to influence framework conditions that are set at national level.

Finally, at institutional and national level, resources should be reserved not just for the quality review process but also for implementing the recommendations; indeed, the resources for the improvements should be significantly higher than the resources for the review processes. If this cannot be guaranteed, one should reduce the scope of the review accordingly.

Institutional quality assurance will be facing many challenges in the coming years. Education, research, knowledge transfer and services will have to become more connected in institutional development in general and quality assurance in particular. More meaningful and differentiated possibilities to benchmark and compare institutional performance internationally will have to be made available, well beyond the current reductive rankings.

Improvement-oriented QA will have to defend itself against the rising gusto for labels, branding and the resurgence of control orientation which even some Scandinavian QA agencies are beginning to observe in their environments. The number of QA processes which an institution has to undergo will have to be reduced in some countries. Synergies between different types of QA will have to be developed, to reduce the administrative burden on institutions.

Most importantly, it cannot be emphasised enough that the future of QA as a meaningful contribution to institutional improvement depends on the survival of the willingness of individuals to improve. With the increasing routine of QA, universities and their supervisory bodies are running the risk of creating evaluation fatigue and even resentment of the disproportionate burden caused by QA. Wherever QA is perceived as keeping professors from their research and teaching rather than helping them achieve even better and more innovative results in teaching and research, it has cut off its own lifeblood.

The contribution of QA to self-improvement is clearly predicated on the trust which the evaluated place in the evaluators. Even the most sophisticated quantitative and bibliometric data cannot replace the value of interpersonal qualitative dialogue between peers who respect one another’s judgement. To sustain the basic trust in peer review, quality and funding agencies have to ensure the independence of the peers and beware of mainstreaming effects which may occur through peer review. The room for the radically new should not be taken away by conventional critics.

Selecting groups of peers that minimise that danger or maximise that innovative space is (and will remain for the foreseeable future) the key challenge of peer review-based QA. National and institutional policies should try to develop programmes which protect such spaces in which the new, unpredictable and unfamiliar can grow – both through funding instruments and quality assurance processes. Another challenge will consist in developing a system of differentiated and flexible quality assurance, as foreseen by the ESG, in which external QA becomes lighter as internal QA systems become more robust and reliable.

Beyond the bounds of quality assurance proper, national ministries and funding agencies have to become more aware of the potential quality effects of steering instruments such as funding channels and national regulations on institutional processes. Finally – boring to repeat but all the more exciting to realise – national systems should accept that institutional autonomy is a necessary condition for effective quality assurance. Without such autonomy, coherent institutional QA will remain impotent and hardly worth the trouble. Likewise, only very few ideas for improvement can be realised without extra resources.

Thus, we may conclude that six conditions have to be given in order to ensure that QA is effective and worth the time and effort:

• At the level of the individuals, there has to be 1. trust in the benefit of the evaluation, 2. willingness to expose one’s weaknesses and 3. readiness to invest time and effort to improve one’s performance where a need for improvement is identified.
• At the level of the institution, there has to be a capacity to realise the outcomes of the evaluation, i.e. 4. a sufficient degree of institutional autonomy, 5. institutional leadership to orchestrate far-reaching and difficult changes and 6. resources to support the change and incentivise corresponding initiatives.

The more we undertake quality assurance without having taken care of these pre-conditions, the more we run the risk of letting it degenerate into mere lip service, into a comfortable method for bureaucratic consciences to be soothed and for politicians to say that they paid attention to quality without meaning it.