Best practice in course evaluation
Eric Bohms, Managing Director, Electric Paper Ltd
As the sector responds to the HEFCE Quality Assessment Review (HEFCE, 2015) on future approaches to quality assessment in England, Wales and Northern Ireland, it is important to consider how the Review meshes with the other fundamental political, structural and regulatory changes affecting the HE sector. Rises in tuition fees, additional international students, changes to student visa policies and the imminent lifting of the student numbers cap, coupled with ongoing governmental and policy changes, have kept the sector in a state of flux. In response to this volatile and ever-changing environment, monitoring and evaluating student attitudes has become imperative. Electric Paper regularly commissions research reports looking into the importance of course and modular evaluation at universities, and how these fit into this constant context of change. The most recent, ‘Breaking Down the Barriers – How to Deliver Best Practice in HE Course Evaluation’ (Kennedy, 2015), drew on in-depth interviews with 12 Pro Vice-Chancellors, senior academics and staff responsible for quality assurance, teaching and learning, and student engagement. The participants came from a range of institutions, including Russell Group members, new universities and private HE providers. The report is forward-looking, investigating best practice and suggesting how potential barriers to effective course evaluation can be broken down. One of its overriding observations is the need for sector-wide collaboration on student evaluation, which makes the HEFCE consultation on quality assessment a timely exercise. There are, however, still many debatable points in this area of our activities, and I aim to cover these in this and a subsequent article in Newslink.
“Whilst qualitative data capture is valuable, it does not go far enough in meeting the expectations of the internal and external stakeholders in HE.”
(a) Paper vs Online
With regard to course or module evaluation, the report suggests that all participants agreed that online methods provide quick and easy survey administration, which is key. However, for most HEIs, adopting online surveys invariably led to a drop in participation, which calls the entire exercise into question. So can this be remedied? The good news is that we are seeing, for the first time, some institutions successfully engaging with course evaluation using online surveys. However, this approach is not universally recommended, as multiple factors go into creating an evaluation culture that succeeds online, which is something I will cover in a subsequent article. In general, I would caution that in-class, rather than out-of-class, delivery is the best way to achieve a high survey response rate, and this often means paper-based surveys. Any academic will tell you that, as soon as a student leaves the classroom, it is very hard to get them to complete the survey. Given the ever increasing requirement for valid and reliable student survey data, paper-based questionnaires are a safe and trusted approach for establishing baseline policy and reporting data, and for informing the transition to online methods (see Smith and Morris [2011], p. 7, on improving response rates). It is important to be mindful that there are many stakeholder concerns that must be addressed, and IT systems that must work seamlessly, for online module evaluation to work properly.
(b) Social Media
The Kennedy (2015) report finds that social media is now a factor in how students themselves evaluate and feed back to universities, which raises the question: is this something that should now be incorporated into course and modular evaluation practices? Clearly, social media in this context poses a new and strategic reputational risk for HEIs.
In the US, for example, some academics actively encourage their students to provide feedback on their teaching via the social media site ‘ratemyprofessors.com’. This example is not unique: new sites pop up all the time, making social media more important than ever. However, such tools are not perfect (see, for example, Legg and Wilson, 2012). I would argue that HEIs must monitor their own teaching quality internally, through course evaluation and programme-level surveys, in order to protect both the reputation of the institution and that of their academics.
(c) More centralised form of course and modular evaluation
It is clear that analytics, and learning technologies in general, are becoming the norm, requiring better data handling both within student records systems and from institutional reporting mechanisms. Business intelligence, and fast, clear visibility focused on key performance indicators (KPIs), are providing an ever more informed executive team with valuable resources for strategic decisions. This change in focus is putting pressure on administrative teams and IT resources to provide a validated baseline and a joined-up IT infrastructure. Closing the loop no longer means only providing students with feedback on survey results; modular feedback results are now being included in a wider, data-driven loop to better inform department, faculty and executive management. However, this has the potential to impact on faculty and departmental autonomy, making it more important than ever to create policy defining data visibility and business rules, whether for the department, the faculty or the institution. As the HEFCE Quality Assessment Review consultation in England, Wales and Northern Ireland moves forward, setting out proposals for future approaches to quality assessment, the writing on the wall is clear: quantitative data integrity, and how it is managed, underpins enhancement and assurance systems. Whilst qualitative data capture is valuable, it does not go far enough in meeting the expectations of internal and external stakeholders in HE. It is important to proactively develop systems around fast-moving requirements for assessing engagement, learning or student outcomes, and perceived learning, as well as providing ways to benchmark at modular and programme level. However, we see a huge variety of opinion, and of interpretation, both within the sector and sometimes within HEIs themselves, as to what all this means.
Eric Bohms
Managing Director, Electric Paper Ltd
Eric has over 19 years’ experience of working in the software sector. Before setting up Electric Paper Ltd in 2009, he held a number of national and international roles across sales, marketing, operations and projects, as well as setting up Cardiff Software Ltd in 1996. Eric gained a Master of Science in Technology & Innovation Management at the John H Sykes School of Business, University of Tampa, and a Bachelor of Arts in History at the University of San Diego. He is passionate about the HE sector and has specialist knowledge of HE policy and of processes involving the student experience. He has also specialised in disruptive technologies, new product development, knowledge management, survey design and deployment, and testing and assessment.
Article originally published in Newslink 82 (November 2016)
References
- HEFCE (2015), Review of Quality Assessment (www.hefce.ac.uk/reg/review/) (accessed 22 July 2015)
- Kennedy, J (Ed.) (2015), Breaking Down the Barriers – How to Deliver Best Practice in HE Course Evaluation (Electric Paper Ltd: London), 12pp.
- Legg, AM and Wilson, JH (2012), ‘RateMyProfessors.com offers biased evaluations’, Assessment & Evaluation in Higher Education, 37(1), 89–97.
- Smith, P and Morris, O (Eds.) (2011), Effective Course Evaluation – The Future for Quality and Standards in Higher Education (Electric Paper Ltd: London), 14pp.
Additional reading
This report commissioned by Electric Paper is the third in a series of publications investigating and promoting effective course and modular evaluation in Higher Education. It follows:
- Effective Course Evaluation: The Future for Quality and Standards in Higher Education (2011) and Closing the Loop: Are universities doing enough to act on student feedback from course evaluation surveys? (2013).
- Bennet, P (2015), Planning for enhancement using programme surveys (accessed 22 July 2015).
- Cholerton, S (2015), A consistent approach to the evaluation of teaching: a tool for ‘Raising The Bar’ (accessed 22 July 2015).
- Further information and reports by Electric Paper can be requested via the Evasys website.