The Language Teacher
October 2001

Nonmeritorious Features of the Entrance Exam System in Japan

Tim Murphey

Yuan Ze University



Meritocracy is a social system in which people get rewards because of what they achieve, rather than because of their wealth or status. . . . If you describe something as meritorious, you approve of it for its good or worthwhile qualities. [Collins Cobuild Learner's Dictionary, 1996, p. 689. Note, "meritorious" follows meritocracy.]

While some people may describe Japanese education as a meritocracy (Zeng, 1999), there are several indications, discussed below, that this is not so for many participants. "The entrance exam system" is actually many sub-systems, some of which are evolving and some of which seem stagnant. At present, the national university system shows signs of shifting, while the private university system, accounting for over 70% of the universities in Japan, appears stagnant. From my personal experience on exam-making committees, and through talking to hundreds of teachers at Monbukagakusho leaders' camps for the past six years and to students for the past 11 years, I find the system is still bleeding the life out of the youth of Japan. And each February universities celebrate the success (especially financial) of their exams, as the meaning of education in high school dies a little more.

Many other observers see the exam system as it operates now as unmeritocratic and harmful to the youth of Japan (see Hood, this issue, and the On CUE 1996 special issue devoted to describing the system, with J. D. Brown's newspaper articles and the email correspondence of many concerned JALT members). Why? Because (and here is my main point): the exams themselves are not evaluated for validity, yet they are the basis for admission to universities, the ranking of departments in universities, and often the teaching methods in high schools (Gorsuch, 2001). Thus, we are basing our efforts at being fair on potentially invalid measurements. J. D. Brown has repeatedly noted this lack of assessment literacy among university entrance exam writers in Japan; I have also written about it in my support of the successful TESOL Resolution on English Entrance Exams to Schools and Universities (Murphey, 2000).

The above description applies mostly to private universities, although many colleagues at national universities have told me similar stories. There is evidence the Center Test is getting better, and some national universities appear to be improving their exams as well (Guest, 2000; Mulvey, 1999, 2001). At the same time, more departments at national universities are adding listening comprehension. These are positive changes; however, I find it dangerous to overgeneralize them to the whole system when 70% of the universities are private and, to my knowledge, none are adding listening to any new department exams. And still no one is openly reporting their exam reliability and validity, if indeed they even have such figures themselves. True, lower-level schools may be accepting nearly all applicants, making the exams simply an expensive admission fee. But the middle- and higher-ranked schools are still subsidising their budgets with the many failures and are not showing many signs of change. When things begin to shift more completely, there will certainly need to be changes in high school teachers' pedagogy, as the writers above imply. However, at present, most HS teachers at academic schools tell me they are still feeling the pressure of traditional exams.

In this short piece, I would like to look at three nonmeritorious activities: recommended-student exams, professorial guessing of exam questions, and ignoring the public desire to communicate in English. Obviously, more research is needed along the lines of the work cited above to document how many universities are changing, what kinds of changes are occurring, and where. Then, perhaps, more universities can be persuaded to do a professional job, and more authentic pressure can be put on HS teachers and textbooks to change.

Exams for Recommended Students

First, entrance by recommendation (suisen) presents a clear example of favouritism and a double standard. There are many different kinds of suisen and a multitude of procedures at different universities. There are some regulations from the Ministry of Education: "Applications should be sent in November or later, and those who pass can comprise a maximum of 50 percent of the enrolment" ("The good and the bad of AO exams," 2001, p. 7). One testing researcher reported to me, "Many of the tandais take 80-90% of their students thru (sic) suisen" (T. K. Lutes, personal communication, April 3, 2001). With some suisen, students are obliged to attend that school once accepted (securing an early income commitment for the school). Often only select high schools chosen by the particular universities can send recommended students (it is not open to all). These recommended students are supposed to be good students; however, some research (Redfield, in press) shows that recommended students are significantly lower in level than those entering with the regular tests, which accords with results at my previous university as well. This happens because high school teachers choose the students they think may have trouble getting in but are otherwise nice kids (personal communication with numerous high school teachers). This means that many brighter students who are not recommended must suffer through exam hell, paying significant amounts of money and taking many exams to increase their chances of being accepted somewhere. Thus, we find that "where you come from," "what kind of high school," even "what religion your school is affiliated with," or simply "how nice you are in your teacher's eyes" can all count unfairly. What is tragic is that bright HS students who are not recommended might not make it into a university because of the second area I wish to describe: the untested tests themselves.

Professorial Guessing

A system striving for meritocracy would do all it could to make sure its instruments were measuring and evaluating students' ability as fairly as possible. It is time academics realized that testing is a professional field and that knowledge of it does not come naturally with everyone's graduate degree (especially in literature and linguistics). Most professors I have dealt with simply believe they can make up good questions using their intuition, and that if they follow previous exam formats and discuss them among their colleagues, the end product will be a good exam. Because of our status as university professors, others think we know what we are doing (and we even start believing it after a while). Thus, status, not merit, is the judge.

We now have the ability to analyse test data after exam administrations and determine which questions worked well and which ones did not. We can use this knowledge to make better questions and tests, and even recycle the good questions later. Yet there are precious few university professors who understand such data or would look at it if indeed it were tabulated.
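To make concrete what such an analysis involves, here is a minimal sketch, in Python with entirely hypothetical data, of the kind of item statistics testing specialists routinely compute after an administration: each question's facility (the proportion of examinees answering it correctly) and its discrimination (how well the question separates stronger from weaker examinees). This illustrates standard classical test theory calculations, not any particular university's procedure.

```python
# A minimal sketch of classical item analysis on a scored answer matrix.
# All data here are hypothetical: rows are examinees, columns are exam
# questions, and each cell is 1 (correct) or 0 (incorrect).

from statistics import mean, pstdev

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
]

totals = [sum(row) for row in responses]   # each examinee's total score
mean_total = mean(totals)
sd_total = pstdev(totals)

for item in range(len(responses[0])):
    item_scores = [row[item] for row in responses]

    # Facility (difficulty): the proportion of examinees answering correctly.
    facility = mean(item_scores)

    # Point-biserial discrimination: the correlation between the item score
    # and the total score. Questions that strong examinees get right and
    # weak examinees get wrong discriminate well; values near zero or
    # negative flag questions worth revising or dropping before recycling.
    sd_item = pstdev(item_scores)
    if sd_item == 0 or sd_total == 0:
        discrimination = float("nan")      # no variance, nothing to measure
    else:
        covariance = mean(
            (i - facility) * (t - mean_total)
            for i, t in zip(item_scores, totals)
        )
        discrimination = covariance / (sd_item * sd_total)

    print(f"Question {item + 1}: facility = {facility:.2f}, "
          f"discrimination = {discrimination:.2f}")
```

A question that nearly everyone answers correctly, or that strong examinees miss more often than weak ones, tells us something about the exam rather than about the students.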

As Tsukada (this issue) states in reference to the exams, "once one is selected in the examinations, then it is proven that one has ability" (which, as many university teachers and their students discover later, is of course not the case). We could say the same fallacy applies to educators: "once you teach at a university, it proves you have the ability to make tests and judge others." It doesn't! We are assigning qualities to university teachers and having them do work they have no expertise in. This is my experience, and probably the experience of most university English faculty in Japan, who cannot speak up for fear of losing their high-paying jobs. When silence is bought with money and job security, the educational system is killing itself. Tsukada, citing Takeuchi's 1995 depiction of meritocracy, also talks of "a limited orientation" among exam takers and salarymen in that they are usually only concerned with immediate goals "without personal meaning or passion." The same can be said of exam developers, university teachers who merely wish to produce the exam and be done with the distracting committee work: most do not consider the potential long-term washback effect of the tests on HS teaching, nor their espoused missions for human dignity (Murphey, 1999). There may in fact be some valid exams, but until their results are analysed we simply do not know. Simply publishing the tests in circulation does not make them fair. The linguistic and pragmatic analyses of these tests are useful, but we need to look at how students actually do on these tests, question by question, to see how valid they are. I have found repeatedly from my own experience on committees that professors' intuition is far from accurate when judging the merit of a question.

The Desire to Communicate Ignored

The lack of communicative content on the majority of the exams is another strike against meritocracy. From the top down, Monbukagakusho has encouraged communicative language teaching (CLT) reforms in high schools through curriculum changes and the JET program. But at the same time, Monbukagakusho has refused to stick its neck out and demand that all schools put listening on their English exams, as Korea has done. There are plans to put listening on the Center Test in 2003, or "in the near future," as one Monbukagakusho employee hedged recently when called for confirmation (June 18, 2001). From the bottom up, the public has shown (through vast spending on conversation schools) that they wish to develop their communicative use of English. Why then do most universities continue to test things that the public does not want to learn and that Monbukagakusho does not openly ask for, thereby shaping English teaching in the high schools and jukus? Or, to ask the opposite, why do jukus and JHS and HS teachers not speak out more strongly for changes in the entrance exams? Is there (as suggested by one TLT reader) a circle of complacency in which the jukus lead the JHS and HS teachers in stagnating the system?

After recently interviewing three middle-management juku teachers in a large juku chain (June 25, 2001), I no longer have a conspiracy theory. While jukus can be faulted for feeding off the system as it is, in my opinion it is ignorance and the fear of change and blame that keep university staff from openly talking about the exams, educating themselves, and risking changes. The jukus are flexible and adaptable businesses; the universities are mostly rigid hierarchies of amateur testers, fearful of risk and change, propagating the status quo.

The Edo period may have begun with movements toward meritocracy, but Japanese education has long since derailed. The signs of system breakdown are evident: school refusal, collapsed classrooms, and increased school violence (Yoneyama, 1999, and this issue). While most people do not trace these back to the mindlessness in the system stemming from entrance exams, university educators do have the power to change it. For the time being, the bloodletting of Japanese education will continue, and though many universities may claim successes in the enrolment and revenues generated by their exams (while others go bankrupt), every day high school English education dies a little more.

References

Gorsuch, G. (2001). Japanese EFL teachers' perceptions of communicative, audiolingual, and yakudoku activities: The plan versus the reality. Education Policy Analysis Archives, 9 (10). [Online]. Available: <epaa.asu.edu/epaa/v9n10.html>.

Guest, M. (2000). "But I have to teach grammar!": An analysis of the role "grammar" plays in Japanese university English entrance examinations. The Language Teacher, 24 (11), 23-28.

Hood, C. (This issue). Is Japan's education system meritocratic? The Language Teacher, 25 (10).

Mulvey, B. (1999). A myth of influence: Japanese university entrance exams and their effect on junior and senior high school reading pedagogy. JALT Journal, 21 (1), 125-142.

Mulvey, B. (2001). The role and influence of Japan's university entrance exams: A reassessment. The Language Teacher, 25 (7), 11-17.

Murphey, T. (1999). For human dignity and aligning values with activity. The Language Teacher, 23 (10), 39, 45.

Murphey, T. (2000). International TESOL encourages assessment literacy among test-makers. The Language Teacher, 24 (6), 27-28.

On CUE, 4 (2). (1996). Special issue: University entrance exams in Japan. Published by JALT's Special Interest Group: College and University Educators.

Redfield, R. (In press). Do entrance procedures make a difference: Comparing English levels of regular and recommendation entry students. Osaka Keidai Ronshu: Osaka Keidai Gakkai.

Tsukada, M. (This issue). Choosing to be ronin. The Language Teacher, 25 (10).

The good and the bad of AO exams. (2001, February 5). Daily Yomiuri, p. 7.

Yoneyama, S. (1999). The Japanese high school: Silence and resistance. London: Routledge.

Yoneyama, S. (This issue). Stress, disempowerment, bullying, and school non-attendance: A hypothesis. The Language Teacher, 25 (10).

Zeng, K. (1999). Dragon gate: Competitive examinations and their consequences. London: Cassell.


