Article Information

Authors:
Yael Shalem1
Ingrid Sapire1
Belinda Huntley2

Affiliations:
1School of Education, University of the Witwatersrand, South Africa

2Mathematics Department, St John’s College, South Africa

Correspondence to:
Ingrid Sapire

Postal address:
Division of Curriculum, School of Education, Private Bag 3, WITS 2050, South Africa

Dates:
Received: 07 Sept. 2012
Accepted: 22 Mar. 2013
Published: 22 May 2013

How to cite this article:
Shalem, Y., Sapire, I., & Huntley, B. (2013). Mapping onto the mathematics curriculum – an opportunity for teachers to learn. Pythagoras, 34(1), Art. #195, 10 pages. http://dx.doi.org/10.4102/pythagoras.v34i1.195

Copyright Notice:
© 2013. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mapping onto the mathematics curriculum – an opportunity for teachers to learn
In This Original Research...
Abstract
Introduction
The Data Informed Practice Improvement Project
   • Curriculum standards – teacher knowledge and interpretation
      • Curriculum mapping
      • Accounting – an opportunity to learn from the examined curriculum
      • The project process
      • The curriculum mapping activity
      • Sample of items
      • Data analysis
      • Validity and reliability
      • Ethical considerations
Findings
   • Overall alignment results
   • Increased confidence and improved judgement
   • Mapping ICAS items and own test items
   • Gaps between the intended and the enacted curriculum
Conclusion
Acknowledgements
   • Competing interests
   • Authors’ contributions
References
Footnotes
Abstract

Curriculum mapping is a common practice amongst test designers but not amongst teachers. As part of the Data Informed Practice Improvement Project’s (DIPIP) attempt to de-fetishise accountability assessment, teachers were tasked to investigate the alignment of a large-scale assessment with the South African mathematics curriculum. About 50 mathematics teachers from Grade 3–9 worked in groups together with subject facilitators from the Gauteng Department of Education and a university postgraduate student or lecturer who acted as group leader. The first project activity, curriculum mapping, provided a professional development opportunity in which groups mapped mathematical assessment items to the assessment standards of the curriculum. The items were taken from three sources: the 2006 and 2007 International Competitions and Assessments for Schools tests, and ‘own tests’ developed by the groups in the last term of the project. Groups were required to analyse the knowledge base underlying test items and to reflect on what they teach in relation to what the curriculum intends them to teach. They used a protocol (mapping template) to record their responses. This article deals with the question of how to transform data collected from large-scale learner assessments into structured learning opportunities for teachers. The findings were that, through the curriculum mapping activity, groups became more aware of what is intended by the curriculum and how this differs from what is enacted in their classes. The findings also showed that the groups’ capacity to align content was better when they worked with leaders, and that with more experience they gained confidence in mapping test items against the curriculum and made better judgements in relation to curriculum alignment. Involving teachers in the interpretation of both public assessment data and data from their own classroom activities can build their understanding of the knowledge base of test items and of the curriculum.

Introduction

As a policy lever for benchmarking standards and for monitoring performance, the South African Department of Basic Education has embarked on a number of national initiatives to collect learner assessment data. A variety of international and local large-scale systemic assessments have been conducted in the country. To date the data from these systemic assessments, both the test items and the test results, have been used by mathematics and language experts, economists and statisticians at a systemic level, predominantly for benchmarking. Teachers have not participated in the production of this evidence, nor has the opportunity been taken up to develop teachers’ skills in interpreting such data. The question is how to transform data collected from large-scale learner assessments into structured learning opportunities for teachers. This article deals with this question.

Merely having another set of data in the form of benchmarking, targets and progress reports that ‘name and shame’ schools leads to resentment and compliance but not to improvement of learning and teaching (Earl & Fullan, 2003; McNeil, 2000). In South Africa, Kanjee (2007) sums up the challenge:

For national assessment studies to be effectively and efficiently applied to improve the performance of all learners, the active participation of teachers and schools is essential. … Teachers need relevant and timeous information from national (as well as international) assessment studies, as well as support on how to use this information to improve learning and teaching practice. Thus a critical challenge would be to introduce appropriate policies and systems to disseminate information to teachers. For example, teacher-support materials could be developed using test items administered in national assessments. (p. 493)

It appears that in using Annual National Assessments the South African Department of Basic Education is aiming to provide teachers with timeous information from national assessments to guide planning and monitor progress (Department of Basic Education, 2010). What is not clear is how the department is planning to support teachers in using this information to improve learning and teaching practice. Very little attempt has been made to involve teachers in data interpretation, and not enough emphasis has been placed on the potential value of the data available from these systemic evaluations for informing teaching and learning practices. International research has engaged with the question of how to use assessment data beyond benchmarking (Earl & Fullan, 2003; Earl & Katz, 2005; Katz, Earl & Ben Jaafar, 2009). In thinking about this question, Katz, Sutherland and Earl (2005) drew an important distinction between two very different kinds of practices in benchmarking: ‘accounting’, which is the practice of gathering and organising data, and ‘accountability’, which refers to teacher-led educational conversations about what the data means and how it can inform teaching and learning. Katz et al.’s (2005) distinction is in line with arguments made by Elmore (2002) and Hargreaves (2001). Hargreaves (2001, p. 524) argues that the future of collegiality may best be addressed by (inter alia) taking professional discussion and dialogue out of the privacy of the classroom and basing it on visible public evidence and data of teachers’ performance and practices, such as shared samples of student work or public presentations of student performance data. Elmore (2002) claims that teachers can be held accountable for their performance only if they have a deep sense of the demands made upon them. Although this may seem obvious, the challenge lies in identifying what counts as making accountability standards explicit.

Literature on professional development programmes for teachers shows that piecemeal forms of intervention are not effective (Borko, 2004; Cohen & Ball, 1999; Earl & Katz, 2005; Elmore & Burney, 1997; Katz et al., 2009). A broad consensus seems to emerge around the following claims: firstly, that teachers require continuous interactive support over a substantial period of time; secondly, that teacher learning should be focused on a few specific educational objects and guided by an expert acting as a critical friend; thirdly, that within the current emphasis on accountability, professional conversations amongst teachers in support networks (broadly referred to as ‘professional learning communities’) can provide teachers with a productive opportunity to cultivate a sense of ownership of what the data means, specifically in relation to their current practices.

The Data Informed Practice Improvement Project

Working with teachers on interpretation of learner assessment data was the central goal of the Data Informed Practice Improvement Project (DIPIP), Phase 1 and Phase 2 (Shalem, Sapire, Welch, Bialobrzeska & Hellman, 2011). The DIPIP project provided a context for professional conversations in which mathematics teachers, together with university academics, graduate students and department-based subject advisors, discussed assessment data. In these discussions, groups were dealing with information from the assessment data that could be used to think about reasons for learners’ errors, map the test items to the National Curriculum Statements (NCS), read and discuss academic texts about mathematical concepts (e.g. the equals sign) and learner errors related to these, develop lesson plans, and reflect on videotaped lessons of some teachers teaching from the lesson plans.

The positive outcomes of research done on the efficacy of professional learning communities served to inform the approach used in this project (Brodie & Shalem, 2011). The term ‘professional learning communities’ generally refers to structured professional groups, usually school-based, providing teachers with opportunities for processing the implications of new learning (Timperley & Alton-Lee, 2008). Commonly, professional learning communities are created in a school and consist of school staff members or a cross section of staff members from different schools in a specific area of specialisation. The groups in the DIPIP project were structured differently and included teachers and practitioners with different knowledge bases and role specialisations (see below). As professional learning communities, the groups worked together for a long period of time (weekly meetings during term time at the Wits Education campus for up to three years from 2007–2010), sharing ideas, learning from and exposing their practices to each other. In these close-knit communities, teachers worked collaboratively on curriculum mapping and error analysis, lesson and interview planning, test setting and reflection.

To provide the basis for a systematic analysis of learners’ errors, the test items and learner achievement data of an international standardised multiple-choice test, the 2006 and 2007 International Competitions and Assessments for Schools (ICAS), were used.1 For the curriculum mapping activity the groups were tasked with investigating the alignment of the ICAS test items (not learner achievement data) with the curriculum at the time, that is, the NCS for Mathematics (Department of Education, 2002). This article focuses on the nature and outcome of the curriculum mapping activity, presenting findings on how the activity provided groups of practitioners with an opportunity to engage with and reflect on the curriculum, and discussing in what ways and to what extent the activity succeeded.

Curriculum standards – teacher knowledge and interpretation
There are two main forms of curriculum. The first is skills based and presents a collection of statements (outcomes and assessment standards). The second is content based and its form foregrounds the conceptual structure of the intellectual field from which it selects specific subject matter. Research in South Africa has shown that an outcomes-based curriculum provides weak signals to teachers about coverage, sequence and progression. Following the findings of several national investigations, the NCS was replaced with a content-based curriculum (Department of Basic Education, 2011). The hope is that a curriculum that gives better signals on content and forms of learning will enable teachers to implement it more effectively. We argue that it is one thing to design a better curriculum, but quite another to ensure that teachers understand what standard is required for the grade they are teaching and what content they should focus on in their teaching.

Teachers’ understanding of accountability demands, their consent and their readiness to accept change are interrelated processes (Shalem, 2003), but policymakers often assume that curriculum standards make policy requirements sufficiently clear to teachers. In practice, ‘reading’ the curriculum requires an application of teacher knowledge. Shulman (1986) refers to three categories of teacher knowledge, namely pedagogic knowledge, content knowledge and pedagogic content knowledge. In order to interpret the curriculum properly, teachers are expected to draw on their subject matter knowledge and to contextualise curriculum standards within their learning environment, taking into account the needs of their learners. In terms of Shulman’s categories of knowledge, this means that teachers need to draw on both their content knowledge and their pedagogic content knowledge in order to interpret and apply the curriculum. More specifically, Ball’s sixth domain of teacher knowledge, ‘horizon knowledge’, is useful here. It refers to teacher knowledge ‘of how mathematical topics are related over the span of mathematics included in the curriculum’ (Ball, Thames & Phelps, 2008, p. 403). Following the work of Ball et al., one can say that in order to set high expectations for their learners, teachers need, in addition to specialised mathematical knowledge straddling six domains of mathematical knowledge for teaching, to understand the sequence and progression of the mathematics they teach. Teachers need to understand what the curriculum aims to achieve in an earlier grade and in what ways the topics they teach connect to the conceptual development of the same concept in a later grade. In this way, teachers could better understand the standards required by the curriculum.

International empirical research shows that curriculum statements about assessment standards, together with results of various standardised assessments, do not, in themselves, make standards clear (Darling-Hammond, 2004; Katz et al., 2009). Empirical research in South Africa has identified misalignment between the demands of the curriculum, teaching and assessment (Reeves & Muller, 2005). Classroom research suggests that many teachers simply ignore important aspects of the NCS and continue to teach, poorly, what they taught before (Brodie, Jina & Modau, 2009; Chisholm et al., 2000; Fleisch, 2007; Jansen, 1999). There are a variety of reasons for this overarching finding. Research in South Africa gives primacy to two inter-related explanations: poor teacher knowledge, in particular subject matter knowledge, and poor signalling by the (NCS) curriculum (Taylor, Muller & Vinjevold, 2003). The argument is that the outcomes-based curriculum provided teachers with very weak signals as to what content should be made available to learners and how this should be done. Many teachers in South Africa lack the strong content knowledge needed to ‘design down’2 tasks, activities and assessments from the outcomes specified in the curriculum (South African Qualifications Authority, 2005). NCS curriculum standards were generally weak, both in content and progression, and therefore provided a weak guide for teachers (Muller, 2006; Reeves & Muller, 2005; Shalem, 2010). Reeves and McAuliffe (2012) found in their study of ‘topic sequence’ and ‘content area spread’ in mathematics lessons that ‘for most learners mathematics was not presented in a coherent and composite manner over the school year’ (p. 28). When the curriculum specifies content as a list of topics, it does not elaborate sufficiently on the topics conceptually, and does not relate the topics to one another adequately. Furthermore, lists of skills describing what learners must do, without any specific content attached to these skills, allow for too much variation (low reliability) in the types of textbooks that are produced, in the criteria used by schools to select textbooks, in teachers’ professional judgement of what counts as an achievement, in the kinds of tasks teachers design and in ‘curriculum coverage’. The main point here is that ‘lists of statements’ do not necessarily show what concepts are key to a field, what activities are worthwhile and what texts are worthwhile (Shalem, 2010, p. 91). Taken together, these explanations suggest that teachers struggle to interpret the curriculum (Brodie, Shalem, Sapire & Manson, 2010).

In this article we add a third explanation. We propose that even if standards are sequenced and well explicated by examples, they do not disclose to teachers what instructional practice should look like or what constitutes acceptable coverage and cognitive demand of curriculum content. Curriculum standards are intended to transmit criteria to teachers of what, when and how to teach mathematical content, but transmitting these criteria by simply telling teachers is not enough. Teachers need to be involved, we argue, in a practice that requires them to use the curriculum standards so that they understand what the standards mean and how they are related to their existing practice. Criteria, says Cavell (1979), are embedded in practice. We ‘find’ them in the way we do and say things. It is in the way we speak or in the way we do things that we make relevant connections, and thereby show that we understand the way a concept is related to other concepts, or its criteria (Shalem & Slonimsky, 1999). Put differently, by doing knowledge-based professional work, teachers, we argue, are given ‘epistemological access’ (Morrow, 1994) to the form in which the curriculum is designed, and more specifically to the content that it privileges. According to Ford and Forman (2006), this kind of professional development work requires a relational framework between three fundamental constitutive disciplinary resources: disciplinary material (working with ‘the material aspects’ of a specific intellectual field or with a set of propositional knowledge limited to the field), collectivity (using the norms of the intellectual field to produce proofs and grounds for judgement), and disciplinary procedure (following procedures to evaluate claims made about the natural or the social world) (p. 4). Taking part in curriculum mapping activities, we argue, provides such an opportunity for teachers’ professional development.

Curriculum mapping
Curriculum literature distinguishes between the intended, enacted and examined curriculum (e.g. Stenhouse, 1975). In broad terms, this distinction refers to the differences and connections between what the official curriculum document intends, including the academic literature teachers use to decide what to emphasise when they teach a mathematical concept in a specific grade (the intended curriculum), what teachers do in their classrooms (the enacted curriculum) and what is assessed in order to determine achievement and progress (the examined curriculum). ‘Curriculum alignment’, the idea that informs ‘curriculum mapping’, describes what counts as a productive educational environment. Biggs’s (2003) premise is that when a teacher covers the content of an ‘intended curriculum’ at the appropriate cognitive level of demand and their learners perform well on high quality tests (the examined curriculum), they have created a productive learning environment (the enacted curriculum), aligned to the demands intended by the curriculum. The corollary of this is that if the quality of a learning environment is judged from the high results of the learners, all things being equal, it can be said that the results of the learners demonstrate that they have studied key content of the subject (curriculum coverage) and that they are able to use the content to answer a range of questions (cognitive level of demand). This is an important insight for understanding the role of curriculum knowledge in teacher practice and the significance of having experience in curriculum alignment.

Through their involvement in curriculum mapping activities, teachers can become more familiar with the requirements of the curriculum and in this way improve the conditions for achieving curriculum alignment (Burns, 2001; Jacobs, 1997). Curriculum mapping is defined as a ‘tool for establishing congruence between what is taught in the classroom and what is expected in state or national standards and assessments’ (Burns, 2001). The idea was formulated by English in the 1980s (cited in Burns, 2001) and in its common form includes a type of calendar on which teachers, in grade-level groups, record time-on-task in each of the topics they teach and the order in which they teach the topics. Over time this practice developed to include teachers’ records of the ways they taught and assessed the topics. In this common form, curriculum mapping is focused on the relation between the enacted curriculum and the intended curriculum.

Accounting – an opportunity to learn from the examined curriculum
It is current practice for policymakers to strengthen accountability by using large-scale assessments (the examined curriculum). The policy idea here is that by providing teachers with a range of assessments at different levels of cognitive demand in relation to key subject matter content, education departments can use the examined curriculum to make curriculum standards explicit. To be considered rigorous, of high quality and valid, large-scale assessments need to be shown to be aligned to the curriculum (Brookhart, 2009; Case, Jorgensen & Zucker, 2008; McGehee & Griffith, 2001). However, and this is the argument of this article, teachers will not be able to fully gauge the requirements of the intended curriculum if they do not have opportunities to participate in analysing the content of these learner assessments: to profile the test items, to examine the curriculum standards that the items articulate, to consider their mathematical content and its alignment with the curriculum standards of the grade they are teaching, and to identify which mathematical concepts or skills are needed in order to find the solution to a test item. We believe that an opportunity for teachers to learn is missed here.

By working with test items (the examined curriculum) and thinking about the links between the content in the test items and the content in the curriculum standards (the intended curriculum), the teachers in the DIPIP project were doing a different, more unusual form of curriculum mapping. The use of test items as an artefact to focus teachers’ thinking when they interpret the curriculum addresses the main challenge faced when interpreting any curriculum document, that is, to identify ‘what’ has to be covered as well as ‘the level’ at which the selected content needs to be taught. In curriculum terms this refers to curriculum coverage in a specific intellectual field at levels of cognitive demand appropriate for specific grades. By structuring professional conversations around curriculum mapping of test items, the mapping activity was intended to provide the teachers with a relational framework, one in which they enact a dimension of expertise (curriculum mapping) from which they are commonly excluded. Through this we hoped to enable the teachers to gain a deeper and more meaningful understanding of the curriculum (the NCS), which, as we have shown above, is a skills-based curriculum that is, in the best case, opaque, especially for teachers with weak subject matter knowledge. It is to the design of the activity that we turn next.

The project process
There were two rounds of the mapping activity in our project: the first from February to May 2008 (Round 1) and the second from August to mid-September 2010 (Round 2). About 50 mathematics teachers from Grade 3 to 9 worked in groups. The initial selection of teachers for participation in the project was guided by the Gauteng Department of Education. In particular, teachers from ‘better performing schools’ that had participated in the ICAS tests were selected. However, as a few teachers dropped out of the project, they were replaced by other mathematics teachers selected from schools with easy access to the Wits Education campus (Shalem et al., 2011). The group membership was highly stable: over the three-year period, a total of 62 teachers participated. The teachers were divided into 14 groups, two groups for each grade. Each group consisted of 3–4 teachers, a subject facilitator from the Gauteng Department of Education and a graduate student or a university staff member as a group leader. Two points must be emphasised here. Firstly, the group leaders were selected for their mathematics classroom experience or alternatively for their involvement in initial teacher education and in-service teacher development. At different points during the project, before the introduction of a new activity, the leaders were trained by a mathematics education expert. Their role in the project was very important, which is borne out by the findings (see below). Secondly, since all the activities were conducted in groups, reporting on DIPIP activities relates to the groups and not to individual teachers. Although results for individual teachers’ mapping would have been more desirable from a research perspective, such a criterion conflicts with the idea of professional learning communities. Notwithstanding this, the results reported in this article are statements arrived at through group discussion in relation to specific activities and reflect the consensual decisions made by the groups.

The ICAS tests were not designed especially for the South African curriculum. These tests, which are used in many countries in the world, were used in South Africa in the good faith that they represent an ‘international’ mathematics curriculum. The mapping activity that the groups were given to complete had not previously been done on this test by any expert or department official. The Gauteng Department of Education treated the test as generally valid for the mathematical content of the grades tested. Our curriculum mapping confirmed this assumption. Table 1 shows our analysis of the content coverage in the ICAS 2006 test.3 The table lists the number of items per curriculum content area for each of the grades studied in the project (consistent with the NCS topic weighting).

The curriculum mapping activity
In Round 1, all 14 groups mapped the 2006 ICAS test items. The groups met once a week for about two hours for 14 weeks, working with their group leaders. Groups were expected to map a minimum of 20 items in Round 1. In Round 2, 11 of the groups mapped ICAS 2007 items or items from tests that they had set themselves (hereafter referred to as ‘own tests’). Groups worked without group leaders in Round 2. This was done in order to see the extent to which they could manage the task on their own.

A modified curriculum document was prepared for use by the groups (Scheiber, 2005). The tabulated curriculum enabled the groups to navigate and refer to the NCS document more easily, to look at and compare the content and contexts across the different grades. The tabulated curriculum has a landscape page setup and matches the assessment standards for each grade, across the page, using numbers. This makes it easy to compare the assessment standards across grades and to see at a glance how concepts are built up in each grade. Figure 1 gives an illustrative example of the mental arithmetic assessment standard strand from Grade 1 to 5. In the full tabulated document, assessment standard strands from Grade 1 to 9 are given across the page.
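To make the cross-grade layout concrete, the following is a minimal sketch (in Python, our illustration rather than anything used in the project) of the structure the tabulated curriculum provides. The strand name follows Figure 1; the descriptor strings are placeholders, not the actual NCS wording.

    # Hypothetical sketch of the tabulated curriculum: one assessment
    # standard strand laid out per grade, so that progression can be
    # compared with a simple lookup. Descriptors are placeholders.
    tabulated_curriculum = {
        "mental arithmetic": {
            1: "Grade 1 assessment standard descriptor",
            2: "Grade 2 assessment standard descriptor",
            3: "Grade 3 assessment standard descriptor",
            4: "Grade 4 assessment standard descriptor",
            5: "Grade 5 assessment standard descriptor",
        },
    }

    # Reading across a strand shows at a glance how a concept builds up:
    for grade, descriptor in tabulated_curriculum["mental arithmetic"].items():
        print(f"Grade {grade}: {descriptor}")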

The groups were given a template (see Figure 2), which structured their conversations and guided the process by which they arrived at a consensus, which was recorded in the template as the ‘group response’. The template was given to the groups in order to focus the conversation around what the ICAS test assessment data (the examined curriculum) means, how it aligns with the conceptual demands of the NCS (the intended curriculum), and how it fits with teachers’ professional knowledge and experience (the enacted curriculum) (Brodie et al., 2010).

For each test item the groups needed to:

• identify the mathematical concept or concepts being tested by the ICAS item
• find the relevant assessment standards relating to the concepts
• justify the choice of the assessment standard
• state when or if the content is taught and whether it is taught directly or indirectly.4

TABLE 1: Number of items per curriculum content area for each grade in the ICAS 2006 test.

FIGURE 1: Exemplar assessment strands (mental arithmetic, Grade 1–5) in tabulated format.

The template required the groups to think about and decide which mathematical concepts or skills are needed in order to find the solution to a test item. This decision determined which assessment standard(s) the group linked each test item to. The groups needed to be sure that their selection was appropriate, and to this end they were asked to give a strong motivation for their decision. The selection was not confined to the NCS of the grade they were analysing, but could relate to several grades; this was possible because the groups worked with all the assessment standards across Grade R–9. The last section of the activity gave the groups an opportunity to examine the alignment between the intended and the enacted curriculum, comparing the content coverage assumptions made by the test designers with what they in fact cover in the classroom. In the last column of the template (Figure 2), the groups needed to report on their teaching practices. This allowed the gap and the congruence between the intended and the enacted curriculum (based on the groups’ reporting) to be made explicit to the teachers in their groups. In this way, the ICAS test was used as an artefact (a concrete textual item) that mediated between the intended curriculum and the groups’ professional knowledge and experience of the enacted curriculum. The upshot is that by being involved in professional work (the work that mathematics and curriculum experts normally do when they align the examined with the intended curriculum), teachers, in their respective groups, reported that they came to understand the demands of the NCS curriculum for the first time (Brodie et al., 2010).

FIGURE 2: Curriculum mapping activity template.
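To make the structure of a completed template row concrete, here is a minimal sketch of one record per test item; the field names and types are our own illustrative assumptions, not the template’s actual labels.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MappingRecord:
        """One group's consensus response for a single test item.

        The fields mirror the four steps listed above; names and types
        are illustrative assumptions, not the template's own labels.
        """
        item_id: str                     # e.g. an ICAS 2006 item number
        concepts: List[str]              # mathematical concept(s) tested
        assessment_standards: List[str]  # linked NCS standard(s), Grade R-9
        justification: str               # motivation for the chosen standard(s)
        taught: str                      # 'directly', 'indirectly' or 'not at all'
        when_taught: str = ""            # e.g. 'term 1', where applicable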

Sample of items
Of the 402 ICAS items that groups mapped in Round 1, 140 items were selected for analysis. Ten items were chosen from the first half of each grade-level test and 10 from the second half, giving 20 items per grade. The selected items included all the content areas tested. All 82 of the items mapped by the groups in Round 2 were selected for analysis.
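As a rough sketch of this selection logic (assuming random choice within each half of a test, which is our assumption since the sampling method is not stated beyond the coverage of all content areas), the Round 1 sample could be drawn as follows; grade_tests is a hypothetical mapping from grade to that grade’s ordered list of ICAS items.

    import random

    def sample_round1_items(grade_tests, per_half=10, seed=0):
        """Draw 10 items from each half of every grade-level test
        (20 per grade, 140 across the seven grades). Random choice
        within each half is assumed here for illustration only.
        """
        rng = random.Random(seed)
        sample = {}
        for grade, items in grade_tests.items():
            half = len(items) // 2
            sample[grade] = (rng.sample(items[:half], per_half)
                             + rng.sample(items[half:], per_half))
        return sample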

Data analysis
A mathematics education expert was employed to map the sample of ICAS test items. The expert’s mapping was validated by a project manager, and based on this agreed mapping, groups were considered to have ‘misaligned’ an item with the curriculum standards when their mapping differed from the validated mapping. The alignment coding was recorded in spreadsheets. Coding of the remaining data involved recording (also in spreadsheets) groups’ comments on content taught ‘directly’, ‘indirectly’ or ‘not at all’. The coded data were analysed quantitatively to find observable trends and relationships in the sample. Examples of groups’ explanations from the templates were recorded to exemplify the quantitative findings; we refer to some of these examples in the findings. For the purposes of reporting on the analysis, we combined the groups into two sets: Grade 3–6 and Grade 7–9.
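A minimal sketch of this coding, assuming each mapping is recorded as a set of assessment standard codes per item and that a match means exact agreement with the validated mapping (the article does not specify the matching rule used in the spreadsheets):

    from collections import Counter

    def alignment_percentage(group_map, expert_map):
        """Share of items (%) where a group's chosen assessment
        standards match the expert's validated mapping. Both
        arguments map item_id -> set of standard codes; exact set
        equality is assumed as the matching rule.
        """
        matched = sum(1 for item, codes in expert_map.items()
                      if group_map.get(item) == codes)
        return 100.0 * matched / len(expert_map)

    def teaching_profile(reports):
        """Tally 'directly' / 'indirectly' / 'not at all' reports into
        percentages (reports maps item_id -> reported label)."""
        counts = Counter(reports.values())
        total = sum(counts.values())
        return {label: 100.0 * n / total for label, n in counts.items()}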

Validity and reliability
A protocol (the mapping template shown in Figure 2) was used to record the groups’ responses in the curriculum mapping activity. The protocol was discussed amongst colleagues in the project management team. Findings were reported at local and international conferences, where they could be discussed to enhance quality. The following points should be noted as possible validity threats:

• Since only one group for each of the grades in Grade 7–9 mapped items in Round 2, comparisons with Round 1 mapping cannot be made for these grades.

• One of the groups (Grade 7) attained full matching with the expert. This may have skewed the data.

• Round 2 was a slightly abridged version of the curriculum mapping activity, due to time constraints: the teachers only mapped curriculum content and did not report on when and how they taught this content.

Ethical considerations
Approval for this study was granted by the Department of Education and, at an institutional level, by the university ethics committee. Informed consent was obtained from all of the teachers, university staff and students who participated in the professional development project meetings. Confidentiality and anonymity of participants were maintained through the use of coded group names (e.g. Grade 3 Group A, which is denoted as G3gA).

Findings

In total, in Round 1, the Grade 3–6 groups mapped 246 ICAS 2006 items and the Grade 7–9 groups mapped 156. In Round 2, five of the 11 groups mapped their own tests; altogether these five groups mapped 27 items. The other six groups mapped 55 ICAS 2007 items. In total, in Round 2, the Grade 3–6 groups mapped 57 items and the Grade 7–9 groups mapped 25 items. In sum, 402 items were mapped in Round 1 and 82 items in Round 2. The group responses recorded on the mapping templates formed the data set on which the analysis presented in this article is based.

We first present the overall results of the curriculum mapping, in which we look at the alignment quality of the groups’ mapping. We then discuss the groups’ reporting on content ‘taught’ (directly and indirectly) and content ‘not taught’, which yields insight into the relationship between the intended and the enacted curriculum.

Overall alignment results
Two overall results can be noted. Firstly, the overall agreement between the expert’s mapping (indicated as ‘alignment’) and the groups’ mapping was relatively high: 83% in Round 1 and 67% in Round 2. These percentages represent the average of the groups’ correctly aligned assessment standards to test items, compared to the expert’s agreed curriculum alignment of the test items. This is an indication that the curriculum mapping was generally successful, particularly in Round 1. The mean misalignment was 26% higher in Round 2. It is important to remember that in Round 1 groups worked with group leaders whilst in Round 2 they worked without group leaders, which may have contributed to the increased misalignment in Round 2. Secondly, the mapping alignment of the Grade 7–9 groups was found to be stronger than that of the Grade 3–6 groups. It stayed at the same level of accuracy (around 80%) in both rounds.

Increased confidence and improved judgement
We investigated what in Round 1 may have contributed to the differences between groups’ strength in mapping in Round 2. We compared the means of the groups that had mapped different numbers of items in the two rounds. We found that groups that had mapped more items in Round 1 achieved higher mean alignment in Round 2 than groups that had mapped fewer items. Put differently, groups that gained more experience of mapping in Round 1, as measured by the number of items they mapped, consistently showed higher alignment percentages in Round 2 (see Figure 3). For this comparison we could only use data from the Grade 3–6 groups, and we compared each paired grade group individually. Since only one group for each of the grades in Grade 7–9 did the curriculum mapping activity in Round 2 (see the discussion on validity), comparisons were not possible here.

This finding suggests that with more experience groups gain confidence in mapping test items against the curriculum and make better judgements in relation to curriculum alignment.
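As a sketch of how such a comparison could be computed (the field layout and threshold are our illustrative assumptions; the article does not publish its analysis scripts):

    from statistics import mean

    def experience_effect(groups, threshold):
        """Split grade groups by Round 1 mapping volume and compare
        mean Round 2 alignment. `groups` is a list of
        (items_mapped_round1, alignment_round2) pairs; `threshold`
        separates 'more' from 'fewer' items mapped.
        """
        more = [align for n, align in groups if n >= threshold]
        fewer = [align for n, align in groups if n < threshold]
        return mean(more), mean(fewer)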

Mapping ICAS items and own test items
Higher misalignment was found in Round 2 for ‘own test’ items than for ICAS 2007 test items in the Grade 3–6 groups. This finding is counter-intuitive: one would have thought that groups would be more familiar with the content of a test that they had drawn up themselves. It might suggest that the groups designed tests that included mathematical content about which they were not entirely confident. Alternatively, it could be that the ‘own test’ items required a different way of thinking when aligning them to the assessment standards of the curriculum, so that the groups’ familiarity with the task made it easier in Round 2 to align ICAS test items but not ‘own test’ items. A third explanation may be that when groups designed their own tests, because they were expected to select a misconception and to design the test items around it, they may have focused their attention on the misconception rather than on the level of content required. It seems they had difficulty embedding the concepts at the appropriate level.

FIGURE 3: Comparison between Round 1 and Round 2 in terms of mean alignment for grade groups that mapped more items or mapped fewer items.

Taking all these findings together, we suggest that the curriculum mapping activity gave the practitioners (we are especially interested in the experience of the teachers in the groups) an opportunity to understand the selection of the mathematical content for the ICAS tests, per grade, as well as its level of cognitive demand. Since groups were working with curriculum standards ranging across several grades, teachers were given an opportunity to analyse which assessment standards (and related mathematical conceptual knowledge) in the South African curriculum learners would need to have achieved in order to answer a test item correctly. Analysing the items conceptually, which groups needed to do in order to identify the mathematical concepts being tested by an ICAS item, alerted the groups to conceptual progression.

Gaps between the intended and the enacted curriculum
The data about the intended curriculum and the groups’ reflections on practice are drawn from the groups’ reporting in Round 1, specifically from the information the groups included in the last column of the curriculum mapping template (see Figure 2). The findings for this section refer only to the mapping of the ICAS 2006 test items (see the discussion on validity). Teachers identified the mathematical content in an ICAS 2006 test item and, after aligning it to the NCS, needed to consider it in relation to their own teaching. This served to make explicit to the teachers the difference between the intended and the enacted curriculum. The highest percentage (46%) of the ICAS test related content was reported to be taught ‘directly’. The next highest (26%) was reported as ‘not taught at all’. The lowest percentage (24%) was reported as taught ‘indirectly’.5

Quotes taken from grade groups’ recorded responses are given as examples of teachers’ reporting on:

• Direct teaching: ‘When we teach bonds’ (G4gA); ‘Taught in term 1, as mental calculations’ (G5gA); ‘Decimal fractions are taught in the first term of Grade 7’ (G7gA).

• Indirect/linked teaching: ‘Should be covered in all problem-solving activities across all grades’ (G6gA, G8gB); ‘Place value is taught in the first term. Link it with money, mass and capacity’ (G5gB); ‘Should be done specifically as the concept of symmetry but it can be done via the theme e.g. Special me (body parts-left and right sides of the body)’ (G3gB).

• Content not taught at all: ‘Number patterns – Grade 1 onwards. Not focused on in Grade 8. Knowledge is assumed’ (G8gB); ‘The concept of odd and even numbers – Assume that it has been taught in earlier grades’ (G5gB); ‘… pictorial representation of patterns is a neglected area, thus making it difficult for them to conceptualise what is required’ (G3gB); ‘Rotation is not taught in Grade 4, but is in the curriculum for Grade 5’ (G4gA, G4gB).

FIGURE 4: Grouped grades’ reporting on ‘when I teach it’ (Round 1).

FIGURE 5: Grouped grades’ reasons given for content not taught.

TABLE 2: Summary of mathematical content reported ‘not taught’.

It is interesting to note (see Figure 4) that the Grade 3–6 groups identified more content in the tests which they said they did not teach because it was at a higher level than they were expected to teach (topics included irregular shapes, rotational symmetry, tessellations, reflections, rotations and probability). The opposite was true for the Grade 7–9 groups.

We further investigated the reasons given by the groups for content ‘not taught’ (see Figure 5) in their completed mapping templates. This investigation gave rise to two categories of mapping, referred to as ‘mapping downwards’ and ‘mapping upwards’. Mathematical content of an item that according to the curriculum should be covered in a lower grade than that of the test was classified as ‘mapped downwards’ – reported, for example, as: ‘This [different perspectives of geometric solids] is not specifically covered in Grade 8, it is formally required to be taught first in Grade 6’ (G8gB). Or, pointing to the problem-based nature of an item, groups reported: ‘Although the number range falls within the scope of the Phase, we do not teach this, because the way in which the problem is presented is beyond the scope of the Phase’ (G3gA). Mathematical content of an item that according to the curriculum should be covered in a higher grade than that of the test was classified as ‘mapped upwards’ – reported, for example, as: ‘We would use this task [modelling involving area] as an extension for the stronger learners’ (G8gA). Most commonly, explanations for ‘not teaching’ fell into one of these categories.
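The classification itself reduces to comparing two grade levels, as in this minimal sketch (representing grade levels as integers is our assumption; the groups made the judgement qualitatively from the tabulated curriculum):

    def classify_not_taught(curriculum_grade: int, test_grade: int) -> str:
        """Classify a 'not taught' item by where the intended curriculum
        places its content relative to the grade of the test."""
        if curriculum_grade < test_grade:
            return "mapped downwards"  # content belongs to a lower grade
        if curriculum_grade > test_grade:
            return "mapped upwards"    # content belongs to a higher grade
        return "at grade level"        # content sits at the test's own grade

    # Example from the reports above: geometric solids content first
    # required in Grade 6 but tested in Grade 8 is mapped downwards.
    assert classify_not_taught(curriculum_grade=6, test_grade=8) == "mapped downwards"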

It is also interesting to note that more content in the primary school (Grade 3–6) than in the secondary school (Grade 7–9) was classified as ‘not taught’ for reasons of being at a higher than expected level for the grade (according to the intended curriculum identified in the NCS).

Table 2 shows that most items reported as ‘not taught’ were at the grade level of the test, according to the intended curriculum. Few of the items reported as ‘not taught’ were mapped up or down, relative to the grade level according to the intended curriculum.

In the Grade 3–6 group, 44% of the content reported as ‘not taught’ was at the expected grade level. In the Grade 7–9 group, 76% of the content reported as ‘not taught’ was at the expected grade level. In both grade groups, the content reported ‘not taught’ at own grade level included mathematical content across all five of the NCS content areas. The Grade 3–6 group mapped up some data, measurement and geometry items and they mapped down some number and data items. The Grade 7–9 group did not map down any items but they mapped up some number, geometry and measurement items.

Some of the specific neglected areas reported by the teachers include (quotes are taken from grade groups’ recorded responses):

• ICAS items that included irregular shapes, rotational symmetry, tessellations, reflections, rotations and probability were reported as beyond the scope of Grade 3 (G3gA, G3gB).

• The pictorial representation of a pattern was reported as not taught at the Grade 3 level. Patterns are usually taught as a horizontal sequence of numbers: ‘… pictorial representation of patterns is a neglected area, thus making it difficult for them to conceptualise what is required’ (G3gB).

• Reasoning logic was reported as not taught in Grade 8: ‘Never! Not mathematics’ (G8gA).

• Finding fractional parts of whole numbers was reported as not taught using learning aids in Grade 5: ‘We never use manipulatives to determine fractional parts of whole numbers’ (G5gA).

Our data analysis shows a direct relationship between the teachers’ perception of the enacted curriculum (content taught ‘directly’) and the degree of success in aligning the international test items to the curriculum. Figure 6 shows higher percentages of misalignment for item content reported as ‘not taught’ (compared to content reported as ‘taught’) for both groups, but particularly in the lower grades. Content reported as ‘taught directly’ is, for the most part, aligned better. This difference in alignment could be an indication of teacher content knowledge. It could be that in the higher grades teachers are teaching more work with which they are not sufficiently familiar, yet they do teach it because it is required of them. It is also possible that the teachers in the lower grades reported more openly on content ‘not taught’. The teachers in the higher grades may leave out content with which they are not familiar but not report on it. Alternatively, the items may include content that the teachers expect should be taught earlier and thus do not report on it.
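A sketch of the cross-tabulation behind Figure 6, under the assumption that each mapped item carries a reported teaching status and an aligned/misaligned code (our representation, not the project’s spreadsheets):

    from collections import defaultdict

    def misalignment_by_status(records):
        """Cross-tabulate misalignment rate (%) by reported teaching
        status. `records` is an iterable of (status, misaligned) pairs,
        where status is e.g. 'taught' or 'not taught' and misaligned
        is a bool.
        """
        totals = defaultdict(int)
        missed = defaultdict(int)
        for status, is_misaligned in records:
            totals[status] += 1
            missed[status] += int(is_misaligned)
        return {s: 100.0 * missed[s] / totals[s] for s in totals}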

In conclusion, by allowing them to analyse if and when the content specifications of the intended curriculum are covered in practice, the mapping activity gave the teachers an opportunity to reflect on their practices. We argue that the mapping activity enabled them to develop their pedagogical content knowledge (Shulman, 1986). By examining curriculum coverage, cognitive demands and alignment, the teachers were involved in ascertaining the match between an international test, the South African curriculum and their professional knowledge and experience. This should have, at least to some extent, developed their understanding of the sequence and progression of the mathematics they teach and hence their grasp of Ball’s sixth mathematical knowledge domain (Ball et al., 2008).

FIGURE 6: Misalignment of content taught and content not taught.

Conclusion

The findings of this research suggest that through the curriculum mapping activity, teachers became more aware of what is intended by the curriculum, in particular of the discrepancy between what they understand to be intended by the curriculum and what they report is enacted in their classes. In relation to the intended curriculum, the teachers were able to report on when they teach and do not teach content and on ‘neglected content areas in schools’, which shows that such activities can build teachers’ awareness of the presence or absence of curriculum content in their classes. The findings also suggest that teachers’ curriculum mapping ability is stronger when they are more familiar with, and hence have greater confidence in doing, the activity. When content was reported as ‘not taught’, a higher level of misalignment was generally seen, which indicates that familiarity with mathematical content does affect the quality of curriculum mapping. The differences between alignment in Round 1 and Round 2 indicate that teachers’ capacity to align content was better when they worked with well-selected and trained group leaders.

This research supports the claim that teachers can benefit a great deal from being involved in interpreting large-scale assessment tests. Curriculum mapping, using the format of a structured interface, creates a ‘defensible focus’ (Katz et al., 2009) for this kind of professional development. The analysis above shows that this structured interface enabled teachers to actively engage in curriculum translation of the disciplinary material embedded in curriculum statements. Working in groups, the teachers learned about the form of the curriculum by using an artefact (the test items) to engage with the curriculum content and standards. Involving teachers in the interpretation of both public assessment data and data from their own classroom activities can build their understanding of the knowledge base of test items and of the curriculum.

Acknowledgements

We acknowledge the funding received from the Gauteng Department of Education and in particular would like to thank Reena Rampersad and Prem Govender for their support of the project. We would also like to acknowledge the pivotal role played by Karin Brodie (project leader, Phase 1 and Phase 2) in the conceptualisation of the project. The views expressed in this article are those of the authors.

Competing interests
The authors declare that they have no financial or personal relationship(s) which might have inappropriately influenced the writing of this article.

Authors’ contributions
Y.S. (University of the Witwatersrand) was the project director, developed the theoretical framework for the article and contributed to the analytical part of the article. I.S. (University of the Witwatersrand) was a project coordinator, was responsible for the data analysis and contributed to the theoretical and analytical parts of the article. B.H. (St John’s College), a mathematics education expert, was a group leader on the project and was involved in the data coding; she also contributed to the writing and development of the article.

References

Ball, D.L., Thames, M.H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. http://dx.doi.org/10.1177/0022487108324554

Biggs, J.B. (2003). Teaching for quality learning at university. (2nd edn.). Buckingham: Open University Press/Society for Research into Higher Education.

Borko, H. (2004). Professional development and teacher learning: Mapping the domain. Educational Researcher, 33(8), 3–15. http://dx.doi.org/10.3102/0013189X033008003

Brodie, K., Jina, Z., & Modau, S. (2009). Implementing the new curriculum in Grade 10: A case-study. African Journal for Research in Mathematics, Science and Technology Education, 13(1), 19–32.

Brodie, K., & Shalem, Y. (2011). Accountability conversations: Mathematics teachers’ learning through challenge and solidarity. Journal of Mathematics Teacher Education, 14(6), 419–439. http://dx.doi.org/10.1007/s10857-011-9178-8

Brodie, K., Shalem, Y., Sapire, I., & Manson, L. (2010). Conversations with the mathematics curriculum: Testing and teacher development. In V. Mudaly (Ed.), Proceedings of the 18th annual meeting of the Southern African Association for Research in Mathematics, Science and Technology Education, Vol. 1 (pp. 182–191). Durban: SAARMSTE. Available from http://www.sdu.uct.ac.za/usr/sdu/downloads/conferences/saar_mste2010/longpapervol1.pdf

Brookhart, S.M. (2009). Editorial: Special issue on the validity of formative and interim assessment. Educational Measurement: Issues and Practice, 28(3), 1–4. http://dx.doi.org/10.1111/j.1745-3992.2009.00148.x

Burns, R.C. (2001). Curriculum mapping. Association for Supervision and Curriculum Development. Available from http://www.ascd.org/publications/curriculum-handbook/421.aspx

Case, B.J., Jorgensen, M.A., & Zucker, S. (2008). Assessment report: Alignment in educational assessment. San Antonio, TX: Pearson Education, Inc.

Cavell, S. (1979). The claim of reason: Wittgenstein, scepticism, morality, and tragedy. Oxford: Clarendon Press.

Chisholm, L., Volmink, J., Ndhlovu, T., Potenza, E., Mahomed, H., Muller, J., et al. (2000). A South African curriculum for the twenty first century. Report of the review committee on Curriculum 2005. Pretoria: DOE. Available from http://www.education.gov.za/LinkClick.aspx?fileticket=Y%2bNXTtMZkOg%3d&tabid=358&mid=1301

Cohen, D.K., & Ball, D.L. (1999). Instruction, capacity and improvement. The Consortium for Policy Research in Education Research Report Series RR-43. Available from http://www.cpre.org/instruction-capacity-and-improvement

Darling-Hammond, L. (2004). Standards, accountability and school reform. Teachers College Record, 106(6), 1047–1085. http://dx.doi.org/10.1111/j.1467-9620.2004.00372.x

Department of Basic Education. (2010). Annual national assessments 2011. A guideline for the interpretation and use of ANA results. Pretoria: DBE. Available from http://www.education.gov.za/LinkClick.aspx?fileticket=Ko7bNU8o2fw%3d&tabid=358&mid=1325

Department of Basic Education. (2011). Curriculum and assessment policy statement. Pretoria: DBE.

Department of Education. (2002). Revised national curriculum statement. Grades R to 9 (Schools) Policy. Pretoria: DOE.

Earl, L., & Fullan, M. (2003). Using data in leadership for learning. Cambridge Journal of Education, 33(3), 383–394. http://dx.doi.org/10.1080/0305764032000122023

Earl, L., & Katz, S. (2005). Learning from networked learning communities. Phase 2 – key features and inevitable tensions. Toronto: National College for School Leadership. Available from http://networkedlearning.ncsl.org.uk/collections/network-research-series/reports/nlg-external-evaluation-phase-2-report.pdf

Elmore, R.F. (2002). Unwarranted intrusion. Education Next, 2(1). Available from http://educationnext.org/unwarranted-intrusion/

Elmore, R., & Burney, D. (1997). Investing in teacher learning: Staff development and instructional improvement in Community School District #2, New York City. Available from http://ideas.repec.org/p/idb/brikps/9043.html

Fleisch, B. (2007). Primary education in crisis: Why South African school children underachieve in reading and mathematics. Cape Town: Juta.

Ford, J.M., & Forman, E.A. (2006). Redefining disciplinary learning in classroom contexts. Review of Research in Education, 30, 1–32. http://dx.doi.org/10.3102/0091732X030001001

Hargreaves, A. (2001). The emotional geographies of teachers’ relations with colleagues. International Journal of Educational Research, 35, 503–527. http://dx.doi.org/10.1016/S0883-0355(02)00006-X

Jacobs, H.H. (1997). Mapping the big picture: Integrating curriculum and assessment K–12. Alexandria, VA: Association for Supervision and Curriculum Development.

Jansen, J. (1999). A very noisy OBE: The implementation of OBE in Grade 1 classrooms. In J. Jansen, & P. Christie (Eds.), Changing curriculum: Studies of outcomes based education in South Africa (pp. 203–217). Cape Town: Juta.

Kanjee, A. (2007). Improving learner achievement in schools: Applications of national assessment in South Africa. In S. Buhlungu, J. Daniel, R. Southall, & J. Lutchman (Eds.), State of the Nation (pp. 470–499). Cape Town: HSRC Press. Available from http://www.hsrcpress.ac.za/product.php?productid=2183

Katz, S., Earl, L., & Ben Jaafar, S. (2009). Building and connecting learning communities: The power of networks for school improvement. Thousand Oaks, CA: Corwin.

Katz, S., Sutherland, S., & Earl, L. (2005). Towards an evaluation habit of mind: Mapping the journey. Teachers College Record, 107(10), 2326–2350. http://dx.doi.org/10.1111/j.1467-9620.2005.00594.x

McGehee, J.J., & Griffith, L.K. (2001). Large-scale assessments combined with curriculum alignment: Agents of change. Theory into Practice, 40(2), 137–144. http://dx.doi.org/10.1207/s15430421tip4002_8

McNeil, L. (2000). Contradictions of school reform: Educational costs of standardized testing. New York, NY: Routledge.

Morrow, W. (1994). Entitlement and achievement in education. Studies in Philosophy and Education, 13(1), 33–47. http://dx.doi.org/10.1007/BF01074084

Muller, J. (2006). Differentiation and progression in the curriculum. In M. Young, & J. Gamble (Eds.), Knowledge, curriculum and qualifications for South African further education (pp. 66–86). Cape Town: HSRC Press. Available from http://www.hsrcpress.ac.za/product.php?productid=2152

Reeves, C., & McAuliffe, S. (2012). Is curricular incoherence slowing down the pace of school mathematics in South Africa? A methodology for assessing coherence in the implemented curriculum and some implications for teacher education. Journal of Education, 52, 9–36. Available from http://joe.ukzn.ac.za/Libraries/No_53_2012/Is_curricular_incoherence_slowing_down_the_pace_of_school_mathematics_in_South_Africa_A_methodology_for_assessing_coherence_in_the_implemented_curriculum_and_some_implications_for_teacher_education.sflb.ashx

Reeves, C., & Muller, J. (2005). Picking up the pace: Variation in the structure and organisation of learning school mathematics. Journal of Education, 23, 103–130.

Scheiber, J. (2005). Reworked version of the Revised National Curriculum Statement. Johannesburg: Centre for Research and Development in Mathematics, Science and Technology Education.

Shalem, Y. (2003). Do we have a theory of change? Calling change models to account. Perspectives in Education, 21(1), 29–49.

Shalem, Y. (2010). How does the form of curriculum affect systematic learning? In Y. Shalem, & S. Pendlebury (Eds.), Retrieving teaching: Critical issues in curriculum, pedagogy and learning (pp. 87–100). Cape Town: Juta.

Shalem, Y., Sapire, I., Welch, T., Bialobrzeska, M., & Hellman, L. (2011). Professional learning communities for teacher development: The collaborative enquiry process in the Data Informed Practice Improvement Project. Johannesburg: Saide. Available from http://www.oerafrica.org/teachered/TeacherEducationOERResources/SearchResults/tabid/934/mctl/Details/id/38939/Default.aspx

Shalem, Y., & Slonimsky, L. (1999). Can we close the gap? Criteria and obligation in teacher education. Journal of Education, 24, 5–30.

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. http://dx.doi.org/10.3102/0013189X015002004

South African Qualifications Authority. (2005). Developing learning programmes for NQF-registered qualifications and unit standards. Pretoria: SAQA.

Stenhouse, L. (1975). An introduction to curriculum research and development. London: Heinemann.

Taylor, N., Muller, J., & Vinjevold, P. (2003). Getting schools working: Research and systemic school reform in South Africa. Cape Town: Maskew Miller Longman.

Timperley, H., & Alton-Lee, A. (2008). Reframing teacher professional learning. An alternative policy approach to strengthening valued outcomes for diverse learners. Review of Research in Education, 32(1), 328–369. http://dx.doi.org/10.3102/0091732X07308968

Footnotes

1.The ICAS test is designed and conducted by Educational Assessment Australia (EAA). In the Gauteng province of South Africa, 55 000 learners across Grade 3–11 in both private and public schools (3000 schools in total) wrote the ICAS tests in 2006, 2007 and 2008.

2.‘Design down’ was one of the imperatives of the NCS. This meant that teachers had to start with curriculum specifications and design lesson plans through which these specifications would be delivered in their classes.

3.Coverage is designed in a similar way in the ICAS 2007 tests.

4.‘Indirectly’ means through an assignment, a project or homework task, or linked to another content area.

5.It should be noted that the Grade 3–6 groups reported certain items as taught both directly and in a linked (indirect) way, so the total of the ‘when I teach it’ percentages for this group goes slightly over 100%. This was not the case in Grade 7–9.