University Teachers' Conception of Assessment: A Structural Equation Modeling Approach
Marjan Moiinvaziri*
Department of English Language, Sirjan Branch, Islamic Azad University, Sirjan, Iran
Abstract
Assessment is considered one of the essential factors influencing students' development and learning approaches and therefore requires special consideration. This study has investigated university teachers' conceptions of assessment using the short version of the "Teachers' Conceptions of Assessment" (TCoA) inventory (Brown, 2006). In addition, the applicability of the teachers' conceptions of assessment models presented by Brown (2008) and Brown and Remesal (2012) to the Iranian context has been explored. A total of 147 university teachers filled out the questionnaire. The results showed that most teachers believed in the use of assessment for the purpose of improving teaching and learning. Teachers' conceptions did not differ by gender, but there was a correlation between years of teaching experience and teachers' conceptions of assessment. Using Structural Equation Modeling (SEM), it was concluded that Brown's model of teachers' assessment beliefs does not fit the Iranian context.
Keywords
Assessment Beliefs, University Teachers, Model Comparison, Structural Equation Modeling (SEM)
Received: March 25, 2015 / Accepted: April 15, 2015 / Published online: May 5, 2015
© 2015 The Authors. Published by American Institute of Science. This Open Access article is under the CC BY-NC license. http://creativecommons.org/licenses/by-nc/4.0/
1. Introduction
Assessment has always been an inseparable part of education. From the first grade of primary school to the higher levels of university education, students are regularly assessed by their teachers. The most commonly used instruments for this purpose have been tests. Marso and Pigge (1988) estimated that 54 teacher-made tests are used in a typical classroom per year. It is therefore no surprise that a typical teacher spends between one-third and one-half of class time on some kind of measurement activity (Stiggins, 1994). Assessment has a great and powerful effect on students' learning, as Scouller (1999) and many other scholars have repeatedly mentioned. Fullan (2001) defines assessment literacy as the teacher's capacity to analyze students' performances and the quality of their work through analysis of their achievement scores and the gathering of evidence.
Assessing students serves many purposes, such as providing summaries of learning, providing information on learning progress, diagnosing specific strengths and weaknesses in an individual's learning, and motivating further learning. Teachers routinely try to adopt new and more valid approaches to assessment. Teachers' use of different methods to assess their students' performance is based on their beliefs about theories of language, teaching, learning, and assessment. Therefore, paying special attention to teachers' beliefs in the course of their professional development seems highly important (Borko et al., 1997). As a result, many researchers have emphasized the need for further exploration of the relation between teachers' beliefs and their assessment practices (Adams & Hsu, 1998; Winterbottom et al., 2008). This study has examined the conceptions of assessment held by university teachers of different fields in Sirjan, Iran, using the Teachers' Conceptions of Assessment (TCoA) inventory (Brown, 2002, 2006). It is hoped that a thorough understanding of teachers' assessment beliefs can be of great help to teacher trainers and curriculum designers in fostering the necessary changes in assessment beliefs and practices within the assessment system already in place.
2. Background
Erwin (1991) defines assessment as "the process of defining, analyzing, interpreting, and using information to increase students' learning and development" (p. 15). Classroom assessment is affected by many factors, among which teachers' beliefs about assessment and its goals are among the most important. Teachers construct most of the tests used in the classroom, and they make decisions about students' learning, progress, problems, and passing or failing based on their own assessments. Fenton (1996) defines assessment as "the collection of relevant information that may be relied on for making decisions. Evaluation is the application of a standard and a decision-making system to assessing data to produce judgments about the amount and adequacy of the learning that has taken place" (p. 20). Assessment can therefore either facilitate or hinder students' learning (Black & Wiliam, 1998). Many different purposes have been proposed for assessment, but the four major conceptions emphasized by various scholars (Nisbet & Warren, 1999; Shohamy, 2001; Brown, 2003, 2006, 2008) are:
1-Assessment improves learning and teaching. (Improvement)
2-Assessment makes students accountable for learning. (Students’ accountability)
3-Assessment demonstrates the quality of schools and teachers. (School accountability)
4-Assessment should be rejected because it is invalid, irrelevant, and negative. (Irrelevance)
Believers in the first conception see assessment as a tool for diagnosing students' learning problems. Teachers should therefore use different methods of assessing students to get a full picture of what they have learned and what their problems are. In addition, teachers can use assessment results to evaluate and improve their own practice as well. Student accountability means that students themselves are held responsible for their own learning. "Thus student accountability is largely about high stakes consequences such as graduation or selection or being publicly reported on as earning a certain grade, level, or score" (Brown, 2002, p. 41). As Musial et al. (2009) have suggested, grading in this conception does not consider what students have achieved or how much they have progressed along a learning continuum; it is concerned only with a student's position relative to other students of the same age. Teacher and school accountability means using assessment to see how well teachers or schools are doing in relation to established standards. This conception is twofold, as Brown (2002) notes: "one rationale emphasizes demonstrating publicly that schools and teachers deliver quality instruction, and the second emphasizes improving the quality of instruction" (p. 33). This conception reflects the requirements of summative assessment. In the last conception, assessment as irrelevance, as Brown (2008) states, "assessment, usually understood as a formal evaluation of student performance, has no legitimate place within teaching and learning" (p. 3). In this view, formal evaluation is seen as something that has negative effects on education, teachers, and learners. Formal assessment can be unfair, can neglect students' abilities, and can cause them anxiety. It can also have an adverse effect on teachers' autonomy and professionalism and distract them from their aim of fostering students' learning (Brown, 2002).
A series of studies has emphasized the fundamental relation between teachers' conceptions of assessment and the improvement of learning and teaching (Black & Wiliam, 1998; Delandshere & Jones, 1999; Popham, 2008), and recently a growing number of studies have investigated teachers' conceptions of and beliefs about assessment and their relation to practice. Pelly and Allison (2000) explored primary school teachers' perspectives on assessment and its impact on their teaching in Singapore. The results showed that teachers believed in the use of formal tests along with other ways of assessment, and they held different views of the efficacy of current tests. In another study, Lu (2003) used interviews and classroom observations to examine the assessment beliefs and practices of two university English instructors in Taiwan; the teachers held a series of beliefs that guided their assessment practices. Cheng, Rogers and Hu (2004) compared the assessment practices of teachers from Canada, China, and Hong Kong; teachers in this study reported using a range of assessment procedures to examine students' language abilities. One of the leading researchers on this issue, Brown (2002), developed a Teachers' Conceptions of Assessment (TCoA) inventory based on the four conceptions of assessment mentioned above. He used the inventory with primary and secondary teachers in contexts such as New Zealand (Brown, 2004, 2006, 2008), Queensland (Brown, Lake, & Matters, 2011), Hong Kong (Brown, Kennedy, Fok, Chan, & Yu, 2009), and Cyprus (Brown & Michaelides, 2011), and the teachers reported improvement of learning and teaching as their dominant purpose of assessment.
There have not been many studies investigating teachers' conceptions of assessment and their practices in Iran. Abbasnasab (2011) gave an open-ended questionnaire to 35 EFL teachers to examine their assessment practices and develop new perceptions; teachers' views were based on their knowledge of language teaching and learning, contextual background, and socio-political factors. Pishghadam and Shayesteh (2012) surveyed 103 EFL teachers' beliefs about assessment using Brown's TCoA inventory; these teachers mostly conceived of assessment as student accountability. What can be concluded from these studies is that teachers' conceptions of assessment and their practices have not been given much attention in Iran, and the two studies mentioned were limited to EFL teachers' beliefs. Therefore, the present study undertakes a thorough investigation of the issue among university teachers from different fields of study.
3. Purpose of the Study
This study intended to explore university teachers' conceptions of assessment and to examine the applicability of the model based on the TCoA inventory to the Iranian context. Furthermore, as teachers' experience and gender could exert an influence over their beliefs about assessment, the relation between these factors and teachers' conceptions was also explored. The study has tried to answer the following questions:
1- What is university teachers’ conception of assessment?
2- What are the assessment practices of university teachers?
3- Is there any difference between the factors derived from Iranian university teachers' conceptions and those of the original study?
4- Is the original model of teachers' conceptions of assessment applicable to the Iranian context?
5- Is there any difference between male and female teachers’ conception of assessment?
6- Is there any relation between years of teaching experience and teachers' conceptions of assessment?
4. Significance of the Study
There is no proper supervision over assessment in different fields in Iranian universities, and in all universities except Payamenoor University, whose exams are held nationally, assessment is carried out by the teachers themselves. Some teachers have not passed any courses on test construction and assessment during their own studies. Having an understanding of teachers' assessment beliefs and practices can be of great help for their professional development. In addition, the results, including extensive and detailed data analysis, can make university authorities, teacher trainers, and curriculum designers aware of university teachers' assessment problems, if any, so that, by redesigning training courses and offering in-service programs and workshops, they can make the changes needed for fair and valid assessment, not only in Iranian universities but also in other countries experiencing the same situation.
5. Method
5.1. Participants
The participants of this study were 147 university teachers (86 males, 61 females) teaching in different universities of Sirjan (Islamic Azad University, Industrial University of Sirjan and Payamenoor University). Teachers were selected randomly from different fields of study. University teachers were chosen rather than teachers at other levels of education, such as high school, because, apart from the insufficient attention given to assessment at the university level, university teachers have much more freedom in their use of different assessment methods than teachers at other levels in Iran. The Ministry of Education exerts very tight control over how students are assessed during their school years; therefore, school teachers are not free to apply whatever practices they consider useful. University teachers, on the other hand, have extensive freedom in applying the assessment practices they consider beneficial. In addition to all the participants answering a questionnaire, 20 of them were interviewed.
5.2. Instruments
5.2.1. Interview
The first instrument was a semi-structured interview containing four questions (see Appendix A), developed on the basis of the previous literature as a guideline for conducting the interviews. The general structure of the interviews followed Lynch's (1996) interview guide:
Casual, put-the-interviewee-at-ease questions/ comments: i.e. the researcher tells them a bit about herself and explains the purpose of the interview.
General questions: The researcher asks the participants about their general opinions about assessment.
Specific questions: The researcher goes over the questions in the interview schedule.
Closing questions: The researcher asks the participants how the previously mentioned factors could be minimized.
Casual, wind-down questions/comments: The researcher expresses appreciation of their participation (p.132).
5.2.2. Questionnaire
The second instrument was the abridged form of the Teachers' Conceptions of Assessment inventory (TCoA-IIIA; Brown, 2006). It contains 27 Likert-scale items with two negative options (mostly disagree and strongly disagree) and four positive options (slightly, moderately, mostly, and strongly agree). The questionnaire examines the four conceptions of assessment mentioned in the background: improvement, student accountability, school accountability, and irrelevance. The improvement conception is divided into four subfactors (assessment describes students' learning, is valid, improves teaching, and improves student learning), each of which is measured by three items. Three items each are assigned to the concepts of student accountability and school accountability. The last conception, irrelevance, contains three subfactors (assessment is inaccurate, is ignored, and is bad), each of which also comprises three items. An additional part eliciting participants' demographic information, such as years of teaching experience and gender, was added. The questionnaire was translated into Persian. To make the answering process easier and more understandable for the participants, the number of available options was reduced from six to five (from strongly agree to strongly disagree). Cronbach's alpha reliability coefficient for the questionnaire was 0.856.
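For readers who wish to see how such a reliability estimate is obtained, the minimal sketch below computes Cronbach's alpha for a response matrix in Python; the data here are simulated stand-ins, not the study's actual 147 × 27 responses.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                               # number of items (27 here)
    item_variances = responses.var(axis=0, ddof=1)       # sample variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 5-point Likert responses standing in for the 147 x 27 data set
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(147, 27))
print(round(cronbach_alpha(simulated), 3))
```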
6. Procedures
6.1. Data Collection
The current study employed a mixed-methods design combining qualitative and quantitative approaches in order to provide a more detailed and comprehensive picture of the topic under investigation. First, 20 university teachers from different departments of the universities of Sirjan (State University, Islamic Azad University and Payamenoor University) were interviewed. The number of interviewees was limited to 20 because at that point no new information was being obtained and data saturation had been reached. The data gathered included their definitions of assessment, their assessment practices, and their views on the advantages and disadvantages of assessment. The interviews were conducted in Persian because the interviewees were not proficient speakers of English. The interviews were audio-recorded, and the recordings were then transcribed and coded to obtain an overall picture of the participants' conceptions of the issues mentioned. The questionnaire described above was then distributed among the participants over a three-week period, since the university teachers were mostly busy or had classes at different hours and on different days in the universities mentioned.
6.2. Data Analysis
After the interview data were transcribed, they were coded, and different categories of teachers' assessment practices and conceptions were derived in order to arrive at a general picture in this regard. To explore the factor structure of the questionnaire items, an exploratory factor analysis was performed using SPSS, and confirmatory factor analyses were conducted in AMOS 18 (Analysis of Moment Structures) to check how well the data fit the original model developed by Brown (2006). In addition, an independent-samples t-test and a one-way ANOVA were run to compare the results with respect to gender and years of teaching experience.
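As a minimal illustration of these group comparisons, the sketch below runs an independent-samples t-test and a one-way ANOVA with SciPy on simulated scale scores; the values, group sizes, and experience bands are illustrative assumptions, since the actual analysis was carried out in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated "Improvement" scale scores for male and female teachers
# (stand-ins for the real data, not the study's actual scores)
male_scores = rng.normal(loc=42, scale=6, size=86)
female_scores = rng.normal(loc=42, scale=6, size=61)

t_stat, p_gender = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t_stat:.3f}, p = {p_gender:.3f}")

# One-way ANOVA across three hypothetical bands of teaching experience
low_exp = rng.normal(41, 6, size=50)
mid_exp = rng.normal(42, 6, size=50)
high_exp = rng.normal(43, 6, size=47)
f_stat, p_exp = stats.f_oneway(low_exp, mid_exp, high_exp)
print(f"F = {f_stat:.3f}, p = {p_exp:.3f}")
```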
7. Results and Discussions
7.1. Qualitative Analysis
Twenty university teachers (7 males, 13 females) from different universities of Sirjan were interviewed. The first question concerned their definition of assessment. Four of them could not give any definition of assessment, and the others mostly defined it as follows, the first definition being the most frequent:
(1) Investigating the amount of learning,
(2) Investigating the strengths and weaknesses of students,
(3) Investigating students' learning using qualitative and quantitative methods,
(4) An instrument for investigating educational progress,
(5) Investigating the proportion of reaching goals.
The second question asked about the methods teachers used for assessing their students. Teachers’ methods of assessment were mostly the same and included:
a) Final and midterm exams
b) In class questions and answers
c) In class activities
d) In class discussions and presentation
e) Quizzes
f) Homework assignments
Teachers explained that they mostly used formal evaluation activities such as final and midterm exams or in-class questions and answers. Informal evaluation activities were either not used or used infrequently. Most teachers cited the large number of students, the lack of time, and the perceived lack of usefulness of informal activities as reasons for not using them, although the methods used in different fields of study were somewhat different.
Table 1. Teachers' assessment methods and their frequency of use in the interviews
Method | Frequency of use |
Final exam | 20 |
Midterm exam | 12 |
Question and answer | 14 |
Class activities | 16 |
Quizzes | 10 |
Homework assignment | 8 |
Class discussion and presentation | 12 |
The third and fourth questions of the interview were concerned with the advantages and disadvantages of assessing students. Among the advantages were:
a) Teachers can understand the amount of knowledge that they were able to transfer
b) Students can become aware of their learning problems
c) Students become motivated to study more
d) Teachers can judge who should pass or fail the course
e) Teachers can improve their methods of teaching
f) It does not have any benefits for teachers
Most teachers believed that assessment could help teachers determine how much students had learned, while four of them thought that it has no use for teachers.
The disadvantages of assessment as mentioned by the university teachers included:
a) Nonstandard exams could be problematic for students
b) It makes students rely only on their textbooks and the teachers' explanations in order to pass the exams
c) It does not have any disadvantages
d) It can be unfair to some students
e) It causes students to become nervous
f) It is time-consuming
Most teachers referred to exams as the problematic part of assessment and explained that sole reliance on them could be inequitable and could not capture students' full ability and knowledge. They explained that some of the reasons might include the personal problems of some students on that particular occasion or nonstandard tests. In addition, it was suggested that teachers should use different methods of assessing students and not rely solely on students' performance in the final exam. Only two of the interviewees believed that assessment has no disadvantages.
Although this part has attempted to answer the first and second research questions, teachers' conceptions of assessment are investigated further in the following section.
7.2. Quantitative Analysis
A total of 147 teachers (86 males and 61 females) filled out the TCoA-IIIA questionnaire. A Structural Equation Modeling (SEM) approach was used to evaluate the teachers' responses.
7.2.1. Confirmatory Factor Analysis
The results showed a difference between the categorization of items in the present study and in the original study, as shown in Table 2.
Table 2. Item categorization in the original CoA-IIIA study and in the present study
Item (CoA-IIIA) | Statement | Original CoA | Present study |
3 | Assessment is a way to determine how much students have learned from teaching. | Improve | Improve |
4 | Assessment provides feedback to students about their performance. | Improve | Student accountability |
5 | Assessment is integrated with teaching practice. | Improve | Irrelevant |
6 | Assessment results are trustworthy. | Improve | Improve |
12 | Assessment establishes what students have learned. | Improve | Student accountability |
13 | Assessment feedbacks to students their learning needs. | Improve | Student accountability |
14 | Assessment information modifies ongoing teaching of students. | Improve | Irrelevant |
15 | Assessment results are consistent. | Improve | Improve |
21 | Assessment measures students’ higher order thinking skills. | Improve | Improve |
22 | Assessment helps students improve their learning. | Improve | Improve |
23 | Assessment allows different students to get different instruction. | Improve | Improve |
24 | Assessment results can be depended on. | Improve | Improve |
7 | Assessment forces teachers to teach in a way against their beliefs. | Irrelevant | School accountability |
8 | Teachers conduct assessments but make little use of the results. | Irrelevant | School accountability |
9 | Assessment results should be treated cautiously because of measurement error. | Irrelevant | School accountability |
16 | Assessment is unfair to students. | Irrelevant | Student accountability |
17 | Assessment results are filed & ignored. | Irrelevant | School accountability |
18 | Teachers should take into account the error and imprecision in all assessment. | Irrelevant | School accountability |
25 | Assessment interferes with teaching. | Irrelevant | Student accountability |
26 | Assessment has little impact on teaching. | Irrelevant | Irrelevant |
27 | Assessment is an imprecise process. | Irrelevant | Irrelevant |
1 | Assessment provides information on how well schools are doing | School accountability | Irrelevant |
10 | Assessment is an accurate indicator of a school’s quality. | School accountability | Irrelevant |
19 | Assessment is a good way to evaluate a school. | School accountability | Improve |
2 | Assessment places students into categories. | Student accountability | Improve |
11 | Assessment is assigning a grade or level to student work. | Student accountability | Improve |
20 | Assessment determines if students meet qualifications standards. | Student accountability | Improve |
In order to check the fit of the first-order factors of the current study against the original study (Brown, 2002, 2008), Model A was developed using confirmatory factor analysis (CFA). In the second model, based on Brown and Remesal (2012), the number of major factors increased to five, retaining three of the original model's main factors and all 27 items; another confirmatory factor analysis yielded Model B. The results based on these models did not show a reasonable fit to the original model. Figure 1 shows the standardized estimates for the factors of the scale based on the data of this study.
As shown in the figure, the statistical indices improved from Model A to Model B. The chi-square (χ²) value decreased from 553 in Model A to 507 in Model B; the closer this index is to zero, the more suitable the model. The comparative fit index (CFI) increased from 0.743 in Model A to 0.784 in Model B; the closer this index is to one, the more acceptable the model. In addition, the root mean square error of approximation (RMSEA) also improved, from 0.071 in Model A to 0.066 in Model B, as lower values of this index indicate a better model. Nevertheless, given the zero p value, both models must be considered unsuitable; in a well-fitting model, the p value should exceed 0.05. Therefore, in the next part, the model based on the exploratory factor analysis is presented.
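For readers without access to AMOS, a comparable confirmatory factor analysis for Model A could in principle be specified in Python with the open-source semopy package, as in the minimal sketch below; the item-to-factor assignment follows Brown's original four conceptions listed in Table 2, while the input file name and the exact statistics printed are assumptions rather than part of the study.

```python
import pandas as pd
import semopy

# Model A: Brown's original four-factor structure (see Table 2)
model_desc = """
Improvement =~ q3 + q4 + q5 + q6 + q12 + q13 + q14 + q15 + q21 + q22 + q23 + q24
Irrelevance =~ q7 + q8 + q9 + q16 + q17 + q18 + q25 + q26 + q27
SchoolAccountability =~ q1 + q10 + q19
StudentAccountability =~ q2 + q11 + q20
"""

# Hypothetical CSV of the 147 x 27 item responses, columns named q1..q27
data = pd.read_csv("tcoa_responses.csv")

model = semopy.Model(model_desc)
model.fit(data)

# calc_stats reports chi-square, CFI, RMSEA and related fit indices
print(semopy.calc_stats(model).T)
```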
7.2.2. Exploratory Factor Analysis
Exploratory factor analysis resulted in an eight-factor solution. It was decided that the eighth factor be omitted, as only one item loaded on it. Table 3 lists the factors obtained.
Table 3. Factor loadings from the exploratory factor analysis
Item | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 |
Factor 1 (alpha = 0.771) | ||||||||
q21 | 0.730 | |||||||
q20 | 0.687 | |||||||
q19 | 0.667 | |||||||
q24 | 0.585 | |||||||
q23 | 0.541 | |||||||
Factor 2 (alpha = 0.757) | ||||||||
q12 | 0.698 | |||||||
q22 | 0.618 | |||||||
q11 | 0.550 | |||||||
q6 | 0.469 | |||||||
q4 | 0.431 | |||||||
q13 | 0.421 | |||||||
Factor 3 (alpha = 0.643) | ||||||||
q8 | 0.736 | |||||||
q17 | 0.701 | |||||||
q16 | 0.549 | |||||||
q7 | 0.429 | |||||||
Factor 4 (alpha = 0.654) | ||||||||
q26 | 0.757 | |||||||
q5 | 0.712 | |||||||
q27 | 0.542 | |||||||
q14 | 0.505 | |||||||
Factor 5 (alpha = 0.559) | ||||||||
q1 | 0.736 | |||||||
q10 | 0.661 | |||||||
q25 | 0.469 | |||||||
Factor 6 (alpha = 0.394) | ||||||||
q2 | 0.805 | |||||||
q3 | 0.424 | |||||||
Factor 7 (alpha = 0.436) | ||||||||
q18 | 0.717 | |||||||
q9 | 0.649 | |||||||
Factor 8 | ||||||||
q15 | 0.745 |
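The exploratory step summarized in Table 3 could be approximated in Python with the factor_analyzer package, as in the sketch below; the rotation method and the input file name are assumptions, since the paper does not state the rotation used and the analysis was actually run in SPSS.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical CSV of the 147 x 27 item responses, columns named q1..q27
data = pd.read_csv("tcoa_responses.csv")

# Extract eight factors; varimax rotation is assumed here
fa = FactorAnalyzer(n_factors=8, rotation="varimax")
fa.fit(data)

loadings = pd.DataFrame(fa.loadings_,
                        index=data.columns,
                        columns=[f"F{i}" for i in range(1, 9)])

# Show only loadings of at least 0.40, mirroring the reporting style of Table 3
print(loadings.where(loadings.abs() >= 0.40).round(3))
```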
Since none of the factors was compatible with the factors of the original study, a new categorization was proposed for the results (table 4).
Table 4. Proposed categorization of items based on the exploratory factor analysis
Factor | Item numbers |
1- School and student evaluation | 21,20,19,24,23 |
2- Helping learning | 12,22,11,6,4,13 |
3- Ignorance and unfairness | 8,17,16,7 |
4- Teaching relevance | 26,5,27,14 |
5- School control | 1,10,25 |
6- Student development | 2,3 |
7- Inaccuracy | 18,9 |
The new categories suggest a difference of opinion between the Iranian teachers and the teachers of the original study. Thus, the third research question receives a positive answer. Based on the outcomes of the exploratory factor analysis, Model C is presented as follows:
It has to be mentioned that the coefficients for questions 15, 9, and 18 were 0.00, 0.09, and 0.12, respectively. Given these substandard values and the corresponding zero p values, these three variables were omitted from Model C in an attempt to obtain a more suitable model. However, even after these modifications, the model still cannot be considered suitable. The low coefficients of questions 9 and 18 may be explained by teachers' lack of familiarity with the concepts of reliability and validity in assessment; in other words, they did not have a clear idea of what is meant by measurement error and errors in assessment. As for the low coefficient of question 15, it was concluded that differences in connotation between the English and Persian wording may have caused confusion about the meaning of the sentence.
In answer to the fourth research question, considering the results of the Structural Equation Modeling (SEM) and the obtained models, it can be concluded that the TCoA-III inventory cannot be considered a good representation of Iranian university teachers' conceptions of assessment. The reason may lie in the numerous differences between the context of the original study in New Zealand and that of the present study in Iran, including differences in the education system, teacher training programs, and even the assessment methods used.
8. Teachers’ Conceptions and Gender
As shown in Table 5, the t-test results indicate no significant difference between male and female teachers' views on any of the four factors.
Table 5. Comparison of male and female teachers' conceptions of assessment (independent-samples t-test)
Variable | Gender | N | Mean | Std. Deviation | t | Sig. |
Improvement | Male | 86 | 41.9 | 6.2 | -0.535 | 0.594 |
Female | 61 | 42.49 | 5.9 | |||
Irrelevance | Male | 86 | 27.3 | 3.7 | 1.760 | 0.080 |
Female | 61 | 26.1 | 4.2 | |||
School accountability | Male | 86 | 9.7 | 2.2 | -1.603 | 0.11 |
Female | 61 | 10.3 | 2.1 | |||
Student accountability | Male | 86 | 10.1 | 1.9 | -0.630 | 0.529 |
Female | 61 | 10.3 | 1.9 |
The results also show that the subcategory "assessment as improvement of teaching and learning" has the highest mean score of the four subcategories, while "assessment as irrelevant to teaching" is in second place. The other two subcategories, "assessment as making schools and teachers accountable for their effectiveness" and "assessment as making students accountable for their learning," have very close mean scores and together occupy third place.
These outcomes may stem from teachers' belief in the use of assessment for improving their students' learning and their own teaching, as was also evident in the interviews. During the interviews, most teachers also expressed their belief in the unfairness and inaccuracy of exams, and some of them stated that they make little use of exam results. Such beliefs may explain why assessment as irrelevance ranked second.
The system of education in Iran is mostly teacher-centered, with teachers holding the most authority as well as the most responsibility (Dolati & Seliman, 2011; Zohrabi et al., 2012). Classes are usually held in the form of teacher lectures, with students remaining mostly passive. In universities, the authorities do not usually exercise any direct supervision over individual classes, their students, or the reliability and validity of the exams given. Therefore, students' assessment results may not be a good indicator of the level or quality of the university. These factors may explain the low mean scores of the last two subcategories.
The same statistical procedures were applied to the seven categories obtained from the exploratory factor analysis, and again no difference was detected between male and female teachers' beliefs in those categories. This means that the answer to the fifth research question is negative.
9. Teachers’ Conceptions and Teaching Experience
In order to investigate the relation between teaching experience and the four general factors of the TCoA-III inventory, Pearson's correlation coefficient was used. The results showed low positive correlations of r = 0.162 and r = 0.168 between teaching experience and the school accountability and student accountability factors, respectively. The significance levels (Sig. = 0.050 and 0.042) indicate that these correlations are meaningful. As teachers' years of experience increase, they endorse the conceptions of school and student accountability more strongly. The reason may lie in a change in teachers' views as they acquire more experience over the years. As mentioned above, the system of education in Iran is mostly teacher-centered. Teachers are the authority in the class, and most teachers lecture in order to teach, as lecturing does not require any special skills. As teachers gain experience, they may recognize the deficiencies of this method of teaching and move toward a more student-centered class. In addition, they may further realize the important role of assessment in improving their own practices and, eventually, schools' performance. However, there was no meaningful correlation between teaching experience and the first two factors.
Factor | N | Pearson correlation | Sig. |
Improvement | 147 | 0.152 | 0.065 |
Irrelevance | 147 | -0.085 | 0.306 |
School accountability | 147 | 0.162 | 0.050 |
Student accountability | 147 | 0.168 | 0.042 |
The following table shows the relation between teaching experience and the factors obtained from the exploratory factor analysis, again using Pearson's correlation coefficient. Based on the results, there were weak correlations of r = 0.203, 0.196, and -0.170 between teaching experience and factors 4 (teaching relevance), 6 (student development), and 7 (inaccuracy), respectively. Since the significance levels (Sig. = 0.014, 0.018, and 0.038) were below 0.05, these correlations are meaningful. As mentioned before, on the one hand, teachers' gain in knowledge over the years may cause a change in their views and practices, and they may become more aware of the importance of assessment for their teaching and their students' development; on the other hand, they become more cautious with regard to inaccuracies in measurement and the decisions that they make. However, there was no significant correlation between teaching experience and the other factors. Therefore, in answer to research question 6, there appears to be a relation between teachers' years of teaching experience and their conceptions of assessment.
Factor | N | Pearson correlation | Sig. |
F1 (school and student evaluation) | 147 | 0.129 | 0.120 |
F2 (helping learning) | 147 | 0.146 | 0.078 |
F3 (ignorance and unfairness) | 147 | -0.142 | 0.087 |
F4 (teaching relevance) | 147 | 0.203 | 0.014 |
F5 (school control) | 147 | 0.088 | 0.289 |
F6 (student development) | 147 | 0.196 | 0.018 |
F7 (inaccuracy) | 147 | -0.171 | 0.038 |
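The correlations in the two tables above are plain Pearson coefficients, which could be reproduced, for example, with scipy.stats.pearsonr; the minimal sketch below uses simulated vectors in place of the actual experience and factor scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated years of teaching experience and factor scores for 147 teachers
years_experience = rng.integers(1, 31, size=147)
factor_score = 0.2 * years_experience + rng.normal(10, 2, size=147)

r, p_value = stats.pearsonr(years_experience, factor_score)
print(f"r = {r:.3f}, Sig. = {p_value:.3f}")
```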
10. Concluding Remarks
Based on the findings of the study, it might be concluded that university teachers were not fully aware of different issues in assessment. Since teachers' beliefs towards assessment and their methods of assessment could have a profound effect on their professional development as well as students' learning (Tillema, 2009), acquiring further knowledge and training in this regard seems necessary.
Although this study has its limitations, such as being conducted in only one city and therefore having a small number of participants, it has shown that the inventory presented by Brown (2006) may not be a good representation of Iranian university teachers' conceptions of assessment. It is suggested that a more extensive sample of Iranian teachers from around the country be gathered and an inventory specialized for Iranian teachers be created in order to obtain a more thorough and exact view of their conceptions of assessment. In this way, teacher trainers and curriculum designers can make the necessary changes in the methods and materials used to train teachers, including more theoretical and practical knowledge of assessment. It is hoped that further studies in this field can bring about a major change in the progression and development of teaching and assessment in Iranian universities as well as in other countries experiencing the same hurdles.
Appendix A: Interview Questions
1- What is your definition of assessment?
2- What methods of assessment do you use in your classes?
3- What are the advantages of assessment in your opinion?
4- What are the disadvantages of assessment in your opinion?
Appendix B: The Questionnaire
Please provide the following demographic information.
A) What is your sex?
Female
Male
B) Select the appropriate age range.
21-25
26-33
34-42
43 and above
C) For how many years have you taught?
Please give your rating for each of the following 27 statements based on YOUR opinion about assessment. Indicate how much you actually agree or disagree with each statement.
Conceptions of Assessment | Strongly disagree | Disagree | No idea | Agree | Strongly agree |
1. Assessment provides information on how well schools are doing | |||||
2. Assessment places students into categories | |||||
3. Assessment is a way to determine how much students have learned from teaching | |||||
4. Assessment provides feedback to students about their performance | |||||
5. Assessment is integrated with teaching practice | |||||
6. Assessment results are trustworthy | |||||
7. Assessment forces teachers to teach in a way that is contradictory to their beliefs | |||||
8. Teachers conduct assessments but make little use of the results | |||||
9. Assessment results should be treated cautiously because of measurement error | |||||
10. Assessment is an accurate indicator of a school’s quality | |||||
11. Assessment is assigning a grade or level to student work | |||||
12. Assessment establishes what students have learned | |||||
13. Assessment informs students of their learning needs | |||||
14. Assessment information modifies ongoing teaching of students
15. Assessment results are consistent | |||||
16. Assessment is unfair to students | |||||
17. Assessment results are filed & ignored | |||||
18. Teachers should take into account the error and imprecision in all assessment | |||||
19. Assessment is a good way to evaluate a school | |||||
20. Assessment determines if students meet qualifications standards | |||||
21. Assessment measures students’ higher order thinking skills | |||||
22. Assessment helps students improve their learning | |||||
23. Assessment allows different students to get different instruction | |||||
24. Assessment results can be depended on | |||||
25. Assessment interferes with teaching | |||||
26. Assessment has little impact on teaching | |||||
27. Assessment is an imprecise process |
References