Media Log

 

I can get a great score on the English test! But I can't speak!

 

In Korea, I always avoided foreigners when they asked me something.

I have studied English for a long time under the Korean English education system. As most people know, Korea is an EFL (English as a Foreign Language) country like Japan and China, so I, like other students, have focused mainly on studying English. Recently, almost all students, even children under six years of age, have been studying English as an unavoidable subject because of their parents and their futures, such as getting more satisfactory jobs and entering a high-level university, even if they are not interested in the language. However, despite this stern reality, I and many other students have come to perceive that the English we have learned is not practical. I have been wondering why I think this, and why I consider it an unsuitable method of English teaching and learning, so I would like to talk about my experiences of learning English and some problems in the Korean English education system.

When I was a primary school student, I did not learn English at all; moreover, English was not important in the educational context at the time, so I used to study mathematics or the Korean language because I had no chance to learn English and knew nothing about it. Most people did not recognize the importance of English as a subject then.

However, when I entered middle school, English suddenly became very important. I had to study English hard and memorize a great deal of vocabulary, and I took a vocabulary test in every English class. I began to feel that English class was hard, boring, and burdensome. I also spent my time just solving English questions, such as multiple-choice and grammar-translation items, and I did not have a chance to speak English even though I was in an English class. When my English teacher taught strictly, we had to read the English textbook and keep silent. English class consisted mainly of reading, writing, and translation. Moreover, the worst part was that there were approximately fifty students in one classroom. In that class, we were not expected to speak English; we just needed to answer questions about English.

English class in high school was just as unsatisfactory as the middle school English class; if anything, the middle school class was perhaps a little better in terms of spoken or colloquial English. Although I studied a higher level of English in high school, it was far from communicative English; it was just a method of getting a high mark on the English test. The most important test was the university entrance exam, which did not assess students' English speaking ability, so I did not need to be able to speak English well. When I recall my English study, I realize, unfortunately, that I never learned practical, communicative English. I have lamented my poor English whenever I have had a conversation with a foreigner, and occasionally I have avoided foreigners for this reason. So why did I have to study English this way, and why does English class focus mainly on English exams? I think this type of English education reflects Korean society, especially the workplace. International English exams such as the TOEIC (Test of English for International Communication) and TOEFL (Test of English as a Foreign Language) have become the standard English evaluation, so most students and office workers study for these exams to get jobs and promotions. Such an unsuitable English situation has its roots in Korea, and it influences English education. The reason I criticize Korean English education so strongly is that the English it produces is not spoken English. Of course, there are differences among Korean students, but I have met many Korean people who regret their English ability because of Korean English classes. Moreover, I sometimes read articles stating that many Korean children go to English-speaking countries to study English because their parents are not satisfied with the style of Korean English education.

In my experience, when I visited my aunt's house, I talked with my aunt's children about their English classes. They said that they could speak English better than other students. I wanted to check their English ability, so I asked them about some vocabulary. They did not pronounce the words but spelled them out and gave the meaning, for example, for "house": 'H O U S E' and '집' (the Korean word with the same meaning). They did not answer with the pronunciation 'háus'. They regarded memorizing spelling as the most important skill in learning English. I thought this situation was problematic.

Change has been slow. Recently, however, many parents are highly educated, so they can recognize which English teaching methods are more helpful for their children. Under the current English education in Korean classes, many students are still just reading English textbooks and translating English into Korean, which encourages going abroad to study English. Fortunately, this unrealistic English education has gradually changed, little by little, toward spoken English in elementary school, so some English teachers have tried to create classes with more actual spoken English. In addition, English textbook writers have also tried to make their materials more "real life". Although I did not receive this kind of English education, from now on other children will learn more useful English, so they should not avoid foreigners who ask them for directions; they should be able to say, "We can answer the English questions, and we can also speak English!"

 

 

Postscript: Major Essay

I am going to develop this essay by adding social aspects and a global perspective: that is, why Korean education has taught students such poor English, and how the globalized world can now affect Korean education.

 


 

 

1. Introduction

 

In the 1970s, the Korean College Scholastic Ability Test was initiated by the Korean Institute for Curriculum Evaluation (KICE) to assess students' English ability as a foreign language. Hence, many students preparing to take the test have studied in both public and private English schools with the objective of doing well on it. The English test in the CSAT originated from the requirement to assess students' readiness to study at university and to diagnose, for educational purposes, what they have learned. However, even though the CSAT has crucial goals in relation to assessing students' foreign-language achievement, some difficulties have been identified in it, such as low validity and problems in its application.

 

Therefore, the purpose of this paper is to analyse the overall description and characteristics of the English test in the 2007 Korean College Scholastic Ability Test, and to suggest theoretical approaches for improving it. This evaluation consists of an analysis of several items from the listening and reading tests. Although the College Scholastic Ability Test has four main sections (Korean language, mathematics, foreign language, and social studies/science), I will highlight and analyse only the English test, in three parts: general information about the test, its framework, and a theoretical approach to the KCSAT.

 

2. What is the CSAT?

 

The CSAT is a test designed to assess college scholastic ability. All students who want to enter a university must take the test, and they all sit it at the same time each year, in the middle of December. The English test in the CSAT contains a variety of content ranging from English conversation to cross-cultural communication, in order to assess candidates' linguistic knowledge and communicative skills. The Korean Institute of Curriculum & Evaluation (KICE) supervises the CSAT's development and implementation procedures; CSAT scoring and data analysis are also managed by KICE. However, most people cannot access information about who designs the test each year, because the CSAT is a nationally significant test and its development is kept secret. The main purpose of the CSAT is to measure how well candidates can understand English and communicate in terms of the macro-skills, including higher-order thinking abilities and understanding of cross-disciplinary materials in the English test. According to KICE, the crucial aims concerning the linguistic knowledge and English skills of the KCSAT are as set out below.

 

1. Measuring the ability to write English within the context of reading comprehension
2. Measuring spoken language skills rather than linguistic knowledge
3. Measuring the ability to understand daily English conversation
4. Measuring the reading ability to summarise and infer main points
5. Measuring the ability to understand facts, make inferences, and apply language resources to problem-solving

 

As this list shows, the CSAT has been involved in implementing these goals in public English instruction. Moreover, KICE has modified and promoted the development of the CSAT in this form as a crucial instrument for assessment.

 

2.1 Clientele

 

Year 12 students in Korea do not have much free time, as they are always concentrating on the Korean College Scholastic Ability Test (KCSAT). They are constantly struggling to improve their scores, since this high-stakes test is seen as the last means of upgrading their social position. Among the various sections of the KCSAT, such as Korean language, mathematics, social studies, sciences, and vocational education, the English test demands particular effort from students in developing their abilities. Korean students learn English from primary school in the public curriculum, and the English curriculum is revised by Korean educators every five years. The current 7th Korean English Curriculum, which was initiated in 2003, is intended to pursue the objectives listed below:

 

To help students communicate naturally about daily life and topics of general interest

To develop their communicative competence

To understand and utilize a variety of information from other countries

 

As mentioned above, the objectives of the Korean English curriculum are to engage students' interest and to extend their communicative competence as well as their linguistic knowledge. However, the English test in the KCSAT, as the foreign language test, has still not been sufficiently linked to the English curriculum. This problematic point needs to be addressed by Korean educators.

 

2.2 Context for which the test has been developed

 

English education in Korea has experienced policy changes over a long period, moving towards more effective and communicative language teaching. Therefore, the English test in the KCSAT has been shaped and modified gradually according to these changes. The context in which the test has been developed clearly grounds its purposes in public-school English instruction. For example, the extension of English listening instruction in the English curriculum caused listening items to be added to the KCSAT, with an increase from 8 items to 17 items. This change of content can positively affect candidates' improvement in communicative skills, which will be discussed further in the section on washback.

 

2.3 Frame of reference

 

No information is publicly available with which to answer this question, and it is unclear why not.

 

2.4 Scoring procedure

 

The KCSAT, which is an objective form of assessment, consists entirely of multiple-choice questions, so candidates need only provide simple answers. They mark their answers on Optical Mark Reader (OMR) cards, which makes the results easy to score, giving the scoring procedure high reliability. The English test in the KCSAT has 50 multiple-choice items, each scored at between 1 and 3 points depending on its degree of difficulty. The English test in the 2007 KCSAT has a maximum raw score of 100 points. When I analysed the KCSAT, the item scores were distributed as shown below:

 

1-point questions x 3 items = 3 points
2-point questions x 44 items = 88 points
3-point questions x 3 items = 9 points
Sum: 100 points

 

Finally, the candidates receive their scores at their schools around a month after the test.
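The arithmetic of this distribution can be checked directly. The short sketch below (Python, using the item counts listed above) simply confirms that the 50 items and their point values sum to the 100-point maximum raw score; it is illustrative only and not part of the official scoring procedure.

    # Sketch: verify the 2007 KCSAT English item-point distribution described above.
    # Keys are points per item; values are the number of items at that weight.
    item_distribution = {1: 3, 2: 44, 3: 3}

    total_items = sum(item_distribution.values())                                # 3 + 44 + 3 = 50
    max_raw_score = sum(points * count for points, count in item_distribution.items())

    print(total_items)     # 50 multiple-choice items
    print(max_raw_score)   # 100-point maximum raw score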

 

3. Test Design, Content, Format and Procedure

 

3.1 Administrative and scoring requirements

 

According to the Korean Institute of Curriculum and Evaluation, around 800,000 students take the CSAT each year at 950 locations in Korea. Hence KICE, as the CSAT's administering body, needs to set up the testing centers and prepare many trained personnel for the test's implementation every year. On the examination day, most public schools are converted from ordinary schools into test centers in order to provide sufficient test places. Almost all of the trained personnel are serving middle school teachers. In addition, police officers control the traffic and any situational problems in order to ensure a completely satisfactory test environment. The CSAT does not require much equipment, but students must bring their OMR (Optical Mark Reader) pens as a test instrument. For the listening section an audio system is required, so each venue has to have a broadcasting system installed. On the test day, candidates are offered the best possible conditions in terms of traffic, equipment, and environment, under the control of the Korean government and KICE.

 

3.2 Test design

 

As the English test is a kind of proficiency test, it assesses students' English listening and reading ability through multiple-choice items. This item format makes it practical to mark the test for a huge number of candidates, and it also has good reliability. However, the English test has the disadvantage that it cannot properly assess English ability in a diverse way, because it consists only of multiple-choice items. The foreign language (English) test in the CSAT consists of 50 items, including 17 listening items, and students sit the test for 70 minutes. The listening test starts at 13:20 and lasts about 20 minutes, and the reading test starts after it. Although all the scheduling, data handling, and scoring of the CSAT are administered by KICE, I could not find information about who designed the test; because it is one of the nationally crucial tests, KICE keeps these details confidential.

 

3.3 Instruction/Rubric

 

The KCSAT test takers are assessed through multiple-choice items, each preceded by a rubric. The rubric clearly shows candidates how to answer the questions, in the Korean language. Thanks to these concise and clear Korean instructions, candidates have no problems answering the questions in either the listening or the reading section. The rubric for the listening and reading questions ensures that candidates make good sense of what the questions require and provides valuable information about them.

 

3.4 Item content & Format

 

Listening Items

Listening, which is a complicated process, is different from the other macro-skills. Therefore, diverse approaches are needed in the test to measure candidates' listening skills. The listening test of the KCSAT aims to measure test-takers' micro and macro listening abilities, such as their range of vocabulary, grammatical knowledge, understanding of overall content, and comprehension of a conversation. Hence the listening part of the KCSAT involves various topics and content, such as the analysis of a graph, descriptions, inference questions, and so on. The emphasis of the listening test is on measuring listening comprehension ability and global listening comprehension skills. The test therefore requires candidates to find the relevant information and to solve situational problems across the various topics used.

 

Reading Comprehension Items

The 33 reading question items in the KCSAT include a variety of topics covering natural science, practical passages, tasks, and cultural content. Moreover, there are many different types of genre, such as reports, discourse, and narrative. The reading test is presented in many item formats, including cloze, true/false, matching information, and short answer; however, as in the listening test, all answers must be given through multiple-choice items. Generally, the length of each text in the reading test ranges from one to three paragraphs. The 33 reading questions must be answered by test takers in 40 minutes, which is not enough time to complete the test comfortably, so the reading task requires the ability to understand each text as quickly as possible.

 

4. Overall characteristics

 

4.1 Validity

 

Weir (1993) notes that validity is the first factor to consider in a test. Validity shows the degree to which a test measures what it is supposed to measure; therefore, it should be analysed in order to decide whether a test is valid.

 

Hughes (1989) defined each characteristic of validity as follows.

Face validity: if a test looks as if it measures what it is supposed to measure, it can be understood as having high face validity. Therefore, the KCSAT's goals should be taken into account when demonstrating its validity. The KCSAT's purpose is to measure students' linguistic ability and communicative skills. According to KICE, the goals of the KCSAT are as follows:

 

Measuring ability to understand facts, to make inferences, and to apply language resources to problem-solving

Measuring ability to understand daily English conversation

Measuring reading ability to summarise and infer the main points

 

The KCSAT, which involves 17 listening items and 33 reading comprehension items, can be shown to have high face validity. The test measures students' linguistic knowledge, such as their range of vocabulary, pronunciation, and grammar, as well as their general understanding of reading passages in terms of genre and situational information. In the listening task, students are given 17 situational listening passages and are required to find the correct answers using their linguistic ability. In the reading task, which consists of 33 reading comprehension questions, students are especially expected to use their linguistic knowledge and reading comprehension ability across a diversity of functional genres. These characteristics require students to demonstrate their overall linguistic knowledge in line with the test's objectives.

 

The various types of genre and functional passages in both the listening and reading tests can be seen to represent the relevant test objectives. The listening and reading questions, which are based on linguistic knowledge across diverse topics, also serve to assess students' knowledge of the general structure of English. These test constructs elicit not only candidates' linguistic knowledge but also their prior knowledge. Therefore, the test is likely to measure its specified constructs appropriately.

 

In the reading sub-test, most tasks require candidates to understand the whole content and the concepts of the passages; candidates need to be able to understand the overall passage and its structure using their linguistic knowledge. However, some reading questions, such as questions 22 and 23, require only grammatical knowledge: they do not require students to demonstrate reading comprehension ability but simply check whether students can infer the answers from grammar alone. These problematic questions show low construct validity and should be of concern to the test designers. Also, the imbalance between 17 listening items and 33 reading items means the test cannot be regarded as a good measure of students' overall foreign language ability.

 

4.2 Authenticity & Interactiveness

Bachman and Palmer (1996), who defined authenticity as the quality of the relationship between the features of a test and real life, state that authenticity should be analysed from two aspects: the real-life and the situational. In the case of the English test in the KCSAT, only some questions show authenticity. Question 11 requires candidates to understand an authentic table of data; this task shows a proper approach to real-life performance. Furthermore, questions 5 and 8 require the understanding of situational information; these tasks attempt to provide authentic interaction between the test takers and the test content. In the listening part, most questions require students to understand a communicative situation in the target-language world, so these questions can be seen as corresponding to a real ability. Moreover, the diverse situational and functional topics in the reading test prompt candidates to become familiar with the characteristics of the test and to make decisions on realistic tasks.

 

As mentioned above, authenticity illustrates the relationship between the test and target language performance. Such authenticity can also be associated with interactiveness. According to Bachman and Palmer (1996, p. 25), "interactiveness is the extent and type of involvement of the test taker's individual characteristics in accomplishing a test task". In this test, candidates would not use specialised knowledge or judgement to find the correct answers in the listening and reading tests; rather, they use their linguistic knowledge.

 

4.3 Reliability

 

Because both the listening and reading tests consist of multiple-choice items, examiners can give reliable marks to candidates in the KCSAT. The scoring process does not require a rater's judgement in interpreting candidates' performances. The KCSAT, which can be understood as an objective test, marks candidates' responses against precise answer keys. Each question requires candidates to mark only one answer, chosen from several options.

 

However, each question is worth between 1 and 3 points, depending on how difficult it is, so candidates' marks reflect three different levels of question difficulty. Lastly, I could not access any information about how questions are classified and assigned to each degree of difficulty.

 

4.4 Practicality

 

Weir (1993) points out that practicality involves consideration of the amount of time spent on scoring and processing the results, as well as the cost and resources of test administration. For the KCSAT, as a national test, the administration has been considered from diverse aspects such as test venues, timing, and trained personnel. KICE (the Korean Institute of Curriculum and Evaluation), which is authorized by the Korean government to design and administer the KCSAT, has managed the financial and logistical conditions needed to make the test practical. Likewise, the test materials are developed and designed by educators every year so that the test can be delivered on schedule. Hence, the KCSAT has been carried out every year without significant practical problems.

 

4.5 Impact / Washback Effect

 

The effect of tests on teaching and learning is known as a test's positive or negative washback. As has been mentioned above, the KCSAT, as a national test, has strongly affected teaching and student learning in the public curriculum, in both positive and negative ways. On the negative side, it is well known that the KCSAT leads many students to neglect their studies in the public curriculum because of the weak link between the KCSAT and the current English curriculum. Such a situation also affects teaching and learning styles: although the purpose of the KCSAT is to measure candidates' integrated linguistic knowledge and communicative English skills, teachers and students have placed their emphasis on test-taking skills, for example how to find key terms or catch hints in the test, without improving their communicative skills or linguistic knowledge. On the other hand, the KCSAT had only 8 listening questions until 1994; as the number of listening questions was increased from 8 to 17, English teachers started to teach listening comprehension as a communicative language skill in their classes and students tried to improve their listening ability. With such slight changes to the KCSAT, students have gained many more opportunities to engage in communicative instruction.

 

 5. Conclusion

 

The KCSAT has been administered as a high-stakes test for 13 years in Korea. Although it has been carried out with some problematic aspects under the administration of KICE, it has been modified toward a more appropriate test by providing a high degree of practicality and by developing its content. Moreover, in its changed form the test has brought about a positive impact on teaching and learning toward achieving its objectives. Lastly, the test can be shown to have high reliability in grading candidates' results.

 

However, although this reliable test has undergone functional and conceptual changes, it still has a few problematic points, as I have mentioned previously. Because of its low construct and content validity, it cannot appropriately measure students' language ability within the scope of the test objectives. Furthermore, as there is no speaking component, the test cannot be said to measure students' communicative skills. As many educators have suggested, speaking and writing, as productive skills, should be assessed for a more precise measurement of foreign language proficiency and achievement.

 

References

 

Bachman, L. & Palmer, A. (1996). Language Testing in Practice. Oxford: Oxford University Press.

Davies, A. (1990). Principles of Language Testing. Oxford: Basil Blackwell.

Hughes, A. (1989). Testing for Language Teachers. Cambridge: Cambridge University Press.

McNamara, T. (2000). Language Testing. Oxford: Oxford University Press.

Weir, C. (1993). 'Issues in language testing', in Understanding and Developing Language Tests. Prentice Hall, pp. 1-29.

 

 

Web sites

 

The Korean Institute of Curriculum and Evaluation: http://www.kice.re.kr/kice/index

 

 

 

 

 

1. General information

 

1.1 Topic and Target Group: This assessment is based on national tourism information as its main topic. The topic can provide diverse information to the students: cultural and national information as well as travel-related content.

 

The students, aged 15, are in an EFL (English as a Foreign Language) class in an Asian middle school. They are studying at an intermediate level and have English four times a week in the public curriculum. I am assuming that they have conversational communicative skills and grammar knowledge.

 

1.2 Scope and Sequence Chart

Overview of the Unit: In this unit, we will prepare a series of classes to show students how to travel overseas independently. Students will gain a basic knowledge of travelling in this unit.
Link/s to other curriculum areas: Tourism
General Communicative Goal/s: Reading, listening, speaking, and writing
Cultural Focus: Cross-cultural background
General description of tasks: (see the lesson outline below)

Lesson 1
Aims: World facts
Language Focus: Reading and context comprehension
Multimedia: A projector and computers
Materials/Resources: World map, handout, pictures

Lesson 2
Aims: Travel
Language Focus: Listening and communication
Multimedia: A CD/tape player and a projector
Materials/Resources: Authentic topic-related materials, pictures

Lesson 3
Aims: Shopping
Language Focus: Speaking and communication
Multimedia: Computers and a projector
Materials/Resources: Authentic materials (money, goods, etc.) and pictures of authentic materials

1.3 Description of procedure (class and assessment): The whole integrated task will be completed in three stages, and the grades of the first and third stages will count toward the final grade.

 

Stage 1:

Each student will have 15 minutes to read two items of tourism reading material* (Appendix 1) handed out by the teacher. After reading the materials, each student will have another 15 minutes to complete a gap-filling assessment* (Appendix 2), which aims to help students understand the content of the reading materials and to assess their reading comprehension ability. Students must complete the assessment individually, and discussion is not permitted during the assessment. There are 10 questions in total (10 points per question), and the Stage 1 grade is weighted at 30% of the final grade. This task is an objective test, for which the following answer key is used in marking the students' responses:

1. England      6. China
2. China        7. America
3. China        8. Australian
4. Australia    9. Australia
5. Australia    10. China

 

Stage 2:

More topic-related reading materials will be handed out by the teacher after the assessment. Students who have the same reading materials form a group, and 20 minutes of preparation time is allowed. Each group has 15 minutes to give a brief group presentation, and note-taking is compulsory for every student during the presentations. The presentation process and the note-taking content will not be marked.

 

Stage 3:

Instructions: Pretend you are a journalist working for a travel magazine. Your boss has asked you to write an information sheet or handout about travelling to a particular country. You should also provide background information about the chosen travel destination. Reading material sheets about different countries will be provided. You are required to write the information based on the reading material given and on the notes you took during the in-class presentation activity. Further research into the selected country is acceptable, but copying directly from reading materials or web pages is forbidden. (The grades for Stage 3 are worth 70% of the final grade; a sketch of how the two stage weights combine appears after the pre-writing questions below.)

 

· Make sure your target readers are the general public who read the magazine.

· Make sure that your report includes at least five pieces of information about the chosen country.

· Length: 250 to 300 words

 

Before you start:

What sort of information do you need to focus on?

Who are you writing for?
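To make the weighting concrete, here is a minimal sketch of how the two graded stages could be combined into a final mark out of 100. The 30%/70% split comes from the stage descriptions above; the assumption that the Stage 1 score (out of 100) and the Stage 3 rubric score (out of 10) are each rescaled proportionally before weighting is mine, as the text does not spell out the conversion.

    # Sketch (assumed conversion): combine Stage 1 and Stage 3 into a final grade out of 100.
    def final_grade(stage1_points: float, stage3_rubric: float) -> float:
        """stage1_points: gap-filling score out of 100 (10 questions x 10 points, 30% weight).
        stage3_rubric: report rating on the 0-10 scale (70% weight)."""
        stage1_part = (stage1_points / 100) * 30   # rescale to the 30% weighting
        stage3_part = (stage3_rubric / 10) * 70    # rescale to the 70% weighting
        return stage1_part + stage3_part

    # Hypothetical example: 80/100 on the gap-filling test and an 8/10 report rating.
    print(final_grade(80, 8))   # 24.0 + 56.0 = 80.0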

 

1.4 Test specifications: This writing test will be performed during regular class time. More than 15 students will take the assessment in the same location, and it takes about one hour including administration time. For this writing assessment, which is designed as a report based on tourism, students should follow the writing format of the specified genre for general writing and description. As test instruments they may use not only paper and a pen but also the notes they took during the in-class presentation activity. Structurally, the assessment requires students to perform a situational, formal writing task. Lastly, they will receive their grades after the scoring procedure, which is explained in the next section.

 

1.5 Scoring scales/criteria

This writing assessment, as an assessment of a productive skill, is to be judged against rating scales in order to make the test more reliable and to provide sufficient guidance to the students. Hence the task rubrics should be designed precisely before students' performances are assessed. The rubric for this assessment focuses on measuring the stated performance objectives and uses a rating range from 0 to 10. Moreover, it contains specific performance characteristics arranged in levels indicating the degree to which a standard has been met, as set out below.

 

Score 10
1. Successfully selects the important information from the reading
2. Accurately presents this information in relation to the relevant points in the reading
3. Well organized and developed
4. Displays progression and unity
5. Occasional structural errors do not affect correct understanding of the text

Score 8
1. Good selection of the important information from the reading
2. Presents this information in relation to the relevant points in the reading, though with minor inaccuracies
3. Well organized and developed
4. Displays progression and unity, though it may contain occasional redundancies
5. Occasional noticeable minor errors in structure

Score 6
1. The overall response is clearly oriented to the task, but it conveys a vague, unclear, or somewhat imprecise connection to the points made in the reading
2. The response may omit one major point
3. Some key points may be incomplete, inaccurate, or imprecise
4. May demonstrate inconsistent facility in sentence formation and word choice, which may result in a lack of clarity
5. Errors of grammar or usage that cause misunderstanding of the text and its meaning

Score 4
1. The response significantly misrepresents the reading
2. Inadequate organization or connection of ideas
3. Noticeably inappropriate choice of words or forms
4. Obvious errors in sentence structure or usage

Score 2
1. The response provides little or no meaningful content from the reading
2. Serious and frequent errors in sentence structure or usage
3. The response completely omits the overall connection between the reading and the writing

Score 0
1. The response merely copies sentences from the reading
2. The content is not connected with the reading
3. Incorrect grammar and structure throughout

This table is based on Hughes (1993, p. 95).

 

2. Theoretical approach to the characteristics of writing assessment

 

It is well known that reading and grammar translation have formed the basis of the curriculum in traditional foreign language classes in EFL countries. Owing to these characteristics of EFL classes, many students face problems such as a lack of ability to write and speak English, the productive skills. Therefore, many EFL classes have begun to emphasise writing instruction within Communicative Language Teaching (CLT). Writing instruction and tasks for EFL students not only bring the benefit of linguistic improvement but also help them to extend their cultural and social knowledge by dealing with diverse topics in the context of specific objectives. A writing task demands a variety of linguistic knowledge from students: a range of vocabulary, grammar, and prior knowledge about writing styles, including the difference between formal and informal writing formats. Moreover, writing can develop skills in functional types of writing across a wide range of academic fields, such as business, culture, history, and science, while adopting a step-by-step approach to teaching how to write in diverse genres. Hence, writing instruction should be included in a foreign language class.

 

Along with these characteristics of writing instruction, writing tasks, which can be associated with the class content, should also be used to achieve other specific goals. For example, by looking at the writing assessment above and using feedback from it as a form of peer assessment, students can judge their linguistic ability and general skills such as generating ideas, structuring their writing, and using prior knowledge. Moreover, the writing assessment gives students a reason to read and understand the specific target language used in their writing. Such characteristics of writing assessment can help students and instructors in teaching and learning the target language toward its own objectives.

 

 

2.1 Reliability

 

“To be valid a test must provide consistently accurate measurements. It must therefore be reliable… if the scoring of a test is not reliable, then the test results cannot be reliable either.” (Hughes, 1993, p.42)

 

This test will be administered to the same group of individuals on a single occasion, so the rating score should be based on the clarification provided by the criteria. Compared with the scoring of short answers, the reliability of this writing assessment depends on the teacher's judgement as a rater. Hence the teacher has to consider how reliably they can judge students' writing and how to process the scoring in order to have a reliable test, and teachers should be trained in how to interpret each criterion. I believe that a better scoring procedure for high reliability in this writing task would be for two raters to score the same paper rather than one, but I assume that only one teacher will be marking the students' writing in this test. Hence, as mentioned above, students can expect good results only from understanding what they are doing, based on the criteria, and from following the rubric. However, because the scoring of this test apparently relies on only one teacher's judgement, it will not be easy to achieve high reliability. Each teacher should therefore understand the purpose of the test well and formulate more specific criteria.
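The double-marking idea mentioned above could be operationalised in many ways; the sketch below is one common routine, not taken from the source: average the two raters' 0-10 rubric scores and refer the script to a third rater whenever they disagree by more than a chosen margin (here, two bands, an arbitrary choice).

    # Sketch (illustrative only): combine two raters' 0-10 rubric scores.
    def combine_ratings(rater_a: int, rater_b: int, max_gap: int = 2):
        """Return (final_score, needs_third_rating)."""
        if abs(rater_a - rater_b) > max_gap:
            return None, True          # disagreement too large; refer to a third rater
        return (rater_a + rater_b) / 2, False

    print(combine_ratings(8, 6))    # (7.0, False)
    print(combine_ratings(10, 4))   # (None, True)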

 

2.2 Validity

 

According to Hughes (1993, p. 27), "a test is said to have face validity if it looks as if it measures what it is supposed to measure". Based on this definition of face validity, this assessment could be understood as having high face validity.

 

This assessment consists of one formal writing task. Its objectives are to measure students' writing ability in terms of spelling accuracy, grammar, and fluency, so students should be able to use their general writing skills adequately. For these objectives relating to linguistic production, the assessment requires students to demonstrate their writing ability, grammar knowledge, a suitable range of vocabulary, fluency, and an understanding of the writing form. Therefore, each student's individual report on the national information they have chosen will be assessed in relation to these objectives.

 

This writing task provides appropriate content, structure, and genre, which can be seen as relevant to the test objectives. It also shows content validity, since students write a report on national information as part of a travel guide in line with the goals. A report is one form of writing among diverse genres, so students have the opportunity to understand and learn the characteristics of a report through the prior activity, which serves as a sample of the relevant content; the task is thus representative of students' general writing ability. Moreover, before the writing assessment, students can see what they will be writing about through the earlier reading materials and gap-filling items. Such prior activities and materials can act as a blueprint for the test, helping to guide students in their writing assessment.

 

 

As many have noted, in a writing assessment construct validity concerns the test's ability to represent students' general writing ability. The aim of this assessment is for each student to write a formal report using their language skills, and the assessment emphasises the overall constructs of writing: structure, grammar, and vocabulary. It assesses writing ability directly, so it seems to be a more appropriate way to measure students' writing ability than indirect measurements. Moreover, students should understand what they are writing and the purpose of writing in this genre. Such characteristics of this writing assessment therefore provide an appropriate relationship with the measurement of language writing ability, as required for construct validity.

 

 

2.3 Authenticity

 

A simulated task can be devised which enables students to perform by adapting their writing skills to real-life tasks. All these skills - planning, researching, writing, and editing - are called into play when students write reports. Hence there is an integral demand on students' ability to complete a specific form of writing in a genre by using their target language. Likewise, students demonstrate their linguistic ability to write by drawing on a range of vocabulary and grammar knowledge. The topic of national facts may also elicit students' prior knowledge of, or familiarity with, the language. These characteristics of the task can be interpreted as establishing a relationship between the target language situation and the specific purpose of language use.

 

2.4 Practicality

 

The general administration of this writing assessment seems quite easy, because it does not require much test administration. The teacher, who acts as test administrator, does not need to take many factors into account, but essentially has to set up the scoring criteria, the timing of the procedure, and the test instruments. Students will be given enough time and an assessment paper to complete their writing tasks properly. During the assessment, the clear test description will prevent students from becoming confused, and the explicit criteria can also help them perform their tasks. Therefore, the practicality of this assessment can be appropriately demonstrated.

 

2.5 Subjective and Performance assessment

 

This writing test has the characteristics of subjective assessment: it relies on the teacher's judgement, in contrast with an objective test. This characteristic can be associated with performance assessment. In this writing task, performance assessment means that the assessment is designed to have students demonstrate their understanding by applying their knowledge and linguistic skills to a target-language situation. Students are given written materials containing diverse national information and are asked to write a report about one chosen country by constructing and using this information together with their linguistic knowledge. The assessment does not require a simple answer from each student but elicits the student's productive skills.

 

2.6 Criterion-referenced assessment

 

According to McNamara (2000), criterion-referenced assessment occurs when a test is designed to provide a measure of performance that is interpretable in terms of a clearly defined and delimited domain of learning tasks. On this explanation, this writing assessment can be understood as criterion-referenced, because it defines the performance criteria of the writing task in terms of its objectives and then measures each student's individual achievement against them. Hence students are not graded in comparison with others' performances but are simply given a rating against the specific descriptions of performance on the rating scale. For this L2 writing assessment, designed primarily to elicit a performance of the students' writing ability, the scoring criteria should clearly articulate the definition of the construct of English writing ability, and raters should be trained to interpret these criteria. Having received the criteria with the assignment, students are able to write toward their specific goals.

 

 

3.  Glossary of 10 relevant assessment terms 

 

1. Reliability (Bachman, L. & Palmer, A., 1996, p. 19): Whatever the topic, every test should show consistent results. A test should give very similar results when it is taken by the same group at a different time or when it is marked by different raters. For example, there should be guidelines for the marking process, and teachers should mark the test in a similar environment.

 

2. Content validity (McNamara, T., 1996, p. 16): The degree to which the content of a test reflects what it is supposed to measure. The items on the test have to reflect what has been taught, covering the language skills, vocabulary, and knowledge related to the chosen topic and, perhaps, to the students' learning plan.

 

3. Construct validity (Messick, S., 1996, p. 11): Evidence of validity gained by showing the relationship between a theoretical construct and the tests that propose to measure it; in other words, the degree to which a test measures an intended hypothetical construct.

 

4. Face validity (Hughes, A., 1989, p. 27): A property of a test intended to measure something; the test is said to have face validity if it "looks like" it measures what it is supposed to measure.

 

5. Authenticity (Douglas, D., 2000, p. 16): The degree to which the materials that test designers use in an assessment are sourced from the test-takers' real life, their learning goals, and the purposes for which most applicants take the test.

 

6. Interactiveness (Bachman, L. & Palmer, A., 1996, p. 25): The range and types of the test-taker's knowledge and skills involved in an assessment, such as areas of language knowledge, strategic competence, or metacognitive strategies.

 

7. Integrative tests (Hughes, A., 1989, p. 16): Assessments that combine various domains of language knowledge and skills, drawing on the test-takers' ability to interpret and produce language from their background knowledge.

 

8. Formative assessment (Bachman, L. & Palmer, A., 1996, p. 98): Formative assessment is part of the instructional process. Its purpose is to improve student learning and to determine to what extent the program, project, or course has met its goals. For example, an in-class test is one of the most common formative assessment techniques.

 

9. Subjective assessment (Hughes, A., 1989, p. 19): Scoring is more difficult because the rater has to use their own judgement. This form of assessment uses questions that may have more than one correct answer, e.g. extended-response questions and essays.

10. Performance assessment (McNamara, T., 2000, p. 5): A test that requires students to demonstrate their knowledge and skills by using their productive skills rather than by choosing from several multiple-choice options.

 

4. Reflection on the whole planning of assessment process

Difficulties in the design process

This assignment does not just require the design of a specific assessment; it also requires an overall understanding of testing knowledge. Designing the assessment itself seemed quite easy, except for the application of the theoretical background that had to be considered. A vague conception of individual assessment terms sometimes led to problems in this assessment, and such aspects sometimes affected the group discussion needed to complete the pair work in this assignment. Moreover, some glossaries did not give good definitions of the precise characteristics used in this assignment, so we spent much time conceptualising each term in the glossary and discussing how it could be interpreted. These difficulties, however, gave us an opportunity to approach the assessment critically and to understand what we needed in order to complete the assignment.

Consideration of the application of the writing assessment in a Korean English class

Because there are different points of view about English writing tests in Korea, I have considered whether this writing assessment could be performed in a Korean English class. Although English pedagogies have shifted from traditional instruction to Communicative Language Teaching (CLT), foreign-language instruction involving writing has so far not featured much in the English curriculum. There are many problems when writing instruction or tests are delivered in a Korean English class: because grammar and reading comprehension have dominated English instruction, students do not have writing skills and are not familiar with writing instruction, so it is not easy for them to perform a writing test. However, writing instruction is necessary for improving target language skills, even though this is a big consideration. Hence, if such writing instruction were adapted within the Korean English curriculum, it could be carried out as part of CLT instruction.

Unit evaluation

Lastly, I would like to talk about the effects of this class on my studies. I have not always been able to conceptualize the qualities of language testing as a whole, so I have sometimes had difficulty understanding what something meant or what its potential implications for testing were. Although I still cannot say that I can explain every notion or rationale of language testing, I recognize that I have learned knowledge necessary for teaching English. Being a good teacher requires not only an appropriate teaching method and linguistic knowledge but also the ability to create a sound test that can measure students' performance from diverse theoretical perspectives.

 

References

 

Bachman, L. & Palmer, A. (1996). Language Testing in Practice. Oxford: Oxford University Press.

Davies, A. (1990). Principles of Language Testing. Oxford: Basil Blackwell.

Douglas, D. (2000). Assessing Languages for Specific Purposes. Cambridge: Cambridge University Press.

Hughes, A. (1989). Testing for Language Teachers. Cambridge: Cambridge University Press.

Messick, S. (1996). 'Validity and washback in language testing', Language Testing, 13(3).

McNamara, T. (1996). Measuring Second Language Performance. London: Longman.

McNamara, T. (2000). Language Testing. Oxford: Oxford University Press.

Weir, C. (1993). Understanding and Developing Language Tests. Prentice Hall.

 

 

 

 

Appendix 2

Gap-Filling Assessment

 

*Fill in the blanks to complete the national information.

1. London is the capital of _______________.

2. ________ has this electric plug. [picture of an electric plug]

3. __________________ has a population of over 1,000,000,000.

4. 61 is the _______________ country dialling code.

5. The kangaroo is an ___________ animal.

6. 70% of the population speak Mandarin in __________.

7. _______________ is a multi-cultural country, so it's not unusual to walk down a city street and hear people speaking Italian, Greek, Lebanese, Vietnamese or Arabic as their first language.

8. This is a map of _______________. [picture of a map]

9. Canberra is the capital of ____________________.

10. Which country is the biggest among Australia, China, and England? ______________