The Language Teacher
July 2001
The On-Line Teaching Evaluation Regional Workshop
Richard Blight
Ehime University
The On-Line Teaching Evaluation Regional Workshop was appropriately held at the City University of Hong Kong on February 28, 2001. I say appropriately because a number of the higher education institutions in Hong Kong have maintained a focus on technological development and are currently at the forefront of on-line implementations. The workshop was intended to further an exchange of views and experiences between institutions committed to on-line student evaluation systems. Some 85 people from throughout the Asia Pacific region attended, including language teaching and technology professionals from Hong Kong, Australia, China, Malaysia, Singapore, and Taiwan, as well as myself from Japan. The conference was also broadcast live via WebCam to viewers in far corners of the world, who were invited to provide input into the round table discussions at the end of the day. The theme for this year's meeting was Issues and Problems of Using the Web for Student Evaluation of Teaching (SET) in Higher Education Institutes.
The workshop was sponsored by the Centres for Enhanced Learning and Teaching (CELT) at the City University of Hong Kong and the Hong Kong University of Science and Technology, and it is worth considering the scale of the technological innovations currently being implemented. Large-scale centralized on-line systems to collect and process SET data on a campus-wide basis have now been developed as major Information Technology (IT) projects involving sizeable expenditures, with one such system costing one million Hong Kong dollars (about sixteen million yen). The general goal of the CELT programs is to promote continuous improvement in teaching (and so to enhance the effectiveness of learning) through the adoption of teaching technologies, teacher development, and quality assurance programs. Project teams developing on-line evaluation systems now needed to justify past expenditures in order to renew their budgets. So there was a common purpose to the conference, with many participants hoping to gain insights into some of the complex issues (and potential resolutions) that other institutions were encountering.
The keynote speakers were Professor Olugbemiro Jegede (Director, Centre for Research in Distance and Adult Learning, Open University of Hong Kong) and Dr Mike Theall (Director, Center for Teaching and Learning, University of Illinois at Springfield). Professor Jegede discussed concerns with using student input as a valid and reliable source of evaluation data. He cited research showing that such data is valuable for analysing teaching processes and remains relatively constant over time. Statistics showed that a common problem with evaluation questionnaires was a response rate of only about 30%, clearly insufficient to provide meaningful results. Monitoring and enforcement of student submissions was evidently necessary, either as a compulsory homework activity or during a regular class session: students could be required to submit questionnaires within a set time frame (with teachers following up late submissions), or the class could be taken to a computer lab for a twenty-minute session during which all students would complete the survey. Jegede argued that the substantial gains to be made in time, cost, and efficiency of administration more than compensated for the loss of classroom time. With some institutions currently handling as many as 50,000 paper evaluation forms per year, each of which must be prepared, delivered, and collected (either by post or by the teacher) and then entered into a computer system (by optical card reader or data-entry clerk), there is clear scope for significant administrative gains. But as Jegede observed in his conclusion, the long-term success of any on-line evaluation system depended on demonstrating significant savings over the existing paper-based system. People and organizations would be reluctant to adopt new technology unless the perceived benefits far outweighed the costs of implementation.
During the workshop, presenters discussed various aspects of designing and developing on-line SET systems, and major systems currently being implemented were also described. In the round table discussion, student representatives reported that students generally preferred on-line systems, since questionnaires could be completed according to individual schedules, but they were concerned about whether responses would remain anonymous, since they were required to log into the system using their student accounts. Indeed, the use of student logins is an important part of on-line systems generally, since responses can then be matched against enrolment records, which allows for more substantial analysis of results (e.g., collating and comparing results from groups of students in different departments, or according to year level). Students were agreeable to this type of data-matching, but despite assurances that their anonymity was guaranteed, they did not feel secure being asked to trust a system about which little was revealed or explained.
And while speakers tried to maintain a focus upon the on-line aspect of student evaluations, it is not possible to discuss SET without reference to some of the fundamental issues underlying evaluation processes generally. Particularly important is the significant likelihood of student evaluations being used by program administrators as a practical tool to measure the comparative teaching effectiveness of foreign language teaching staff. But this approach needs to be tempered by an awareness of the broad limitations of the SET process. There is the complex problem of determining the degree to which students' scoring of a teacher's skill actually reflects learning gains, which are the intended and primary object of instruction. Students may believe they have learnt a great deal when this is in fact not the case (and equally, the reverse). There is also a tendency for students to rate teachers in terms of how much they liked the teacher, rather than how much they learnt during the course. To the extent that this occurs, the entire value of SET is undermined, with the data providing no more than a measure of the relative popularity of various teachers. And given the difficulty of measuring learning gains generally, any statistical relationship between a teacher being popular and being effective would also be difficult to determine. But to summarize, as Dr Theall argued, the implementation of an on-line system provides substantial gains in administration and in the possible depth of analysis of results, but does nothing to resolve the underlying complexities of the evaluation process. Teacher evaluations will always remain a high-stakes business for those directly involved, and those stakes are not altered by the medium of the process undertaken. So it is first necessary to develop a system of evaluation that suits the specific needs of each institution, with questions about whether (and how) to go on-line being raised subsequently and in relation to the needs of the specific system that has been developed.
Many other issues were discussed. Should teachers be permitted to provide input into the content and form of the questionnaire? How should institutions deal with teachers who routinely receive low scores from students? Should such teachers have their employment terminated? And what about teachers employed on a tenured basis? How many institutions could afford to support staff with professional development programs, set up to give under-performing teachers an opportunity to improve their skills? And the fundamental questions recurred: how can the accuracy of results be assured, and to what extent should institutions determine the course of professional careers on the basis of SET data? To guard against over-reliance upon SET data, some institutions require teachers wishing to extend their employment contracts to submit a portfolio of their work, one component of which is a set of student evaluations.
In closing, I can't help but notice a few passing ironies. In the higher education sector in Japan, where EFL instruction is mostly delivered by contracted native speakers and tenured non-native speakers, student evaluations could similarly be used as a measure of job performance for determining extensions of native speaker contracts. But bearing in mind that the primary goal of SET is to improve the quality of the learning program (as experienced by the thousands of students attending each institution), couldn't more substantial learner gains be achieved by applying student evaluations to the non-native teachers, the majority of whom do not hold (postgraduate) qualifications in teaching a foreign language? And finally, I hope the conference organizers won't mind if I don't return the survey on the On-Line Teaching Evaluation Regional Workshop, the purpose of which, of course, is to provide for an improved meeting next year.