Holistic Assessment and Problem Based Learning
Author & presenter: Glen O’Grady
Type of Presentation: Oral
Theme: Assessment
Student assessment (assignments, examinations, presentations, portfolios, etc.) is widely accepted as an
important part of the learning process. This paper explores how assessment underpins the quality of
learning in problem based learning. It also reports the effects certain types of assessment have upon the
quality of problem based learning. The paper concludes that assessment in PBL needs to be holistic in
nature, meaning the raft of assessment tasks should both foster and measure a wide array of skills such that
the rich nature of learning promised by PBL is achieved.
Glen O'Grady
Deputy Director
Centre for Educational Development
Republic Polytechnic
Tanglin Campus
1 Kay Siang Road, Singapore 248922
DID: (65) 6376-8151
Fax: (65) 6415-1310
www.rp.edu.sg
Holistic Assessment and Problem Based Learning
Glen O’Grady
5th Asia-Pacific Conference on PBL: 16-17 March 2004
Introduction
In this paper I examine the role of student assessment in learning, in particular problem
based learning (PBL). I will concentrate on the summative and formative functions of
assessment in respect to learning. However, to talk about assessment is to discuss “several
things at once” (Ramsden 1992). Assessment encompasses measuring, defining and
summarizing what students can do and/or inferring what students could do. Assessment
can entail gate keeping where students are selected or rejected on the basis of norm or
criterion referenced standards. Assessment is about diagnosing the quality of learning so
that learners' strengths can be identified and weaknesses addressed. It can also involve
fostering change both in the learner and the teacher as they both seek to negotiate their
symbiotic relationship (Rowntree 1987). All of this points to the fact that when
considering assessment one cannot reduce it to the common duality of formative and
summative functions. However, a detailed exploration of the many factors embodied in
student assessment is beyond the scope of this paper; instead it will focus on the
connection between assessment and learning while not forgetting that the practice of
assessment may encompass more than just its association with learning.
This paper will show how a selection of assessment methods, in a PBL setting, can be
used to both foster and measure learning that can be characterised as “understanding”. In
reporting the effects different types of assessment methods have upon learning in PBL, I
will argue that assessment needs to be holistic in nature, meaning a raft of assessment
tasks should be used collectively when fostering and measuring understanding.
Learning is Understanding
Everyone agrees that learning is the object of education; however, many differ in their
conceptions of learning. Dewey (1972) criticizes conceptions of learning that define
learning as “fragments” and “dualisms” which are simply synthesised. Rather, Dewey
describes learning as a habit formation that arises from the continual reconstruction of
existing dualisms into entirely new unitary functions and understandings.
Recent findings about brain architecture and cognition support Dewey’s view of
learning. Researchers (Lakomski 2002, Minsky 1986) have suggested that the brain does
not appear to operate in a linear processing fashion of matching individual stimuli to pre-
existing pieces of information. Those advocating connectionism contend that learning
occurs when neurons that make up neural nets fire in certain unique patterns. These
connections form a pattern that can be strengthened through regular activation and
recognition of these patterns. In other words, learning encompasses more than absorbing
or identifying discrete bits of information and then mapping them against other similar
bits of information; rather, it is the interaction of a new idea with the sum of all that we
know, to the effect of transforming existing cognitive or neural structures.
Situated cognition (Brown, et al. 1989) further suggests that an activity in which
knowledge is developed and deployed is not separable from, or ancillary to, learning and
cognition. Rather, it is an integral part of what is learned, since every situation triggers a
unique set of connections that has not already been experienced by the mind.
Every act of speaking, every motion of the pen, each gesture, turn of head, or any
idea at all is produced by the cognitive architecture as a matter of course, as a new
neurological coordination (Clancey, 1993, pp. 111-112).
The familiarity of cognitive patterns is often characterised as prior knowledge. While
prior knowledge influences the way we make new meaning, there is also the potential for
new understandings to transform existing thinking structures leading to new conceptions
of the world. In other words, prior knowledge is itself constructed, rather than retrieved
intact from memory, when it is brought to bear on new situations.
Social constructivists add that learning is not only a process that occurs in the mind of the
learner, but is negotiated through interaction with others (von Glasersfeld 1988).
A broader view of learning, encompassing various cognitive perspectives, concerns how
people make sense of and understand the world and themselves (Bruner 1990). Fosnot (1996)
argues that if learning is about sense making then the goal of instruction is not to shape
behaviours or skills but rather to influence conceptual development and deep
understanding.
A conception of learning that focuses upon “understanding” has serious implications for
assessment.
Learning for Understanding: Implications for Assessment
Assessment has for a long time been recognised as a powerful driver of learning, and
therefore something that teachers could leverage to ensure students achieve desired
objectives. Unfortunately, this belief about assessment has contributed, perhaps
inadvertently, to assessment grades being used as a carrot or stick, resulting in
learning for the sake of its external value and not the intrinsic value of understanding. In
one study an examiner lamented:
In an ideal world quantitative marks would be abolished…this is because students
tend to see their mark as a valuation of their worth (Warren Piper et al. 1996: 83)
This belief that assessment drives learning focused on the external value of learning has
helped to sustain simplistic conceptions of learning and assessment. In an outcome based
system focused on the external value of learning what has become important is the
performance on the test rather than the learning that facilitates doing well on a test. In
these instances learning has become subject to the instrumental ideals of efficiency,
accountability, that which is easily tested and measured, and the ranking of individuals
and/or institutions. In this context assessment has developed into a quality assurance
mechanism that validates the transfer of measurable knowledge and skills from the expert
to the novice. Furthermore, assessment as an objective judgement has helped to drive the
idea that assessment, learning and teaching are all separate activities (Huba and Freed
2000).
However, as outlined above there are alternative views of learning and along with these
views a commitment to re-defining of the role of assessment in learning:
Assessment is the process of gathering and discussing information from multiple
and diverse sources in order to develop a deep understanding of what students
know, understand and can do with their knowledge as a result of their educational
experiences; the process culminates when assessment results are used to improve
subsequent learning (Huba and Freed 2000:8)
Herrick (1996), in his summary of Dewey’s views on assessment, suggests:
Assessments are valuable when they are "low stakes," that is, when students and
teachers use them to clarify achievement, interest, and aptitude for the purpose of
stimulating new learning. Assessments are valuable when they are authentic to
life experiences as much as possible, thus encouraging the integration of
vocational and general education. Assessments are valuable when they promote
positive consequences for students, thus helping them form new and more
meaningful habits. Assessments are valuable when they do not separate students
into performance categories, but rather help students better understand their own
growth. Assessments are valuable when they match instructional aims, which, in
turn, match societal aims for education. In short, assessments are valuable when
they expand opportunities for students rather than limit opportunities.
Dewey (1980), Perry (1970), and Biggs and Collis (1982) all point out in their studies of
student intellectual development that assessment focused on feedback is essential for the
development of a deep understanding. Assessment is so critical in learning for
understanding that the processes of teaching, learning and assessment cannot be easily
separated. Rowntree (1987:24) perhaps best locates assessment when he describes it as
the “life-blood of learning”.
Rather than being a driver of learning outcomes external to understanding, assessment
can be an enabler of understanding. The difference is significant. As an enabler of
understanding, assessment becomes part of the teaching and learning process
by virtue of the fact that understanding is a process and not simply a product that can be
objectified and easily measured. Furthermore, assessment as an integral part of the
learning process requires a shift away from teaching models that define learning in terms
of discrete bits of information that are dispensed and passively absorbed, to models that
provide opportunities for the student to want to make sense and understand the world
around them (von Glasersfeld, 1996).
This different conception of assessment requires a re-think about the purpose of different
assessment methods. Methods that focus on measuring discrete or observable behaviours,
the hallmark of traditional examinations (Warren Piper et al. 1996: 10-13), may not be
appropriate for assessing understanding. Numerous studies have shown that many current
approaches to assessment are not encouraging students to learn in a deep fashion
(Ramsden 1992: 181-213) or are failing to assess deep learning or understanding. One
example is a study by Nurrenbern (1987), which reported that 88% of students were able
to answer algorithmic questions correctly, but only 31% were able to respond
appropriately when tested conceptually. The researchers also concluded that
success on traditional exam-type questions did not necessarily indicate conceptual
understanding.
PBL offers an alternative approach to learning where assessment is an integral element in
both the facilitation and measurement of understanding.
Understanding: the Promise of Problem-based Learning
The underlying principle in PBL is "all learning begins with a problem". The problem
provides the direction of learning, the motivation for learning and the application of
learning (Barrows & Tamblyn, 1980; Boud & Feletti, 1991; Schmidt, 1983). The appeal
of PBL is its enormous potential for developing understanding, since encapsulated in
PBL are explicit expectations that students:
• explore knowledge concepts within different contexts;
• articulate what they already know about a problem (prior knowledge);
• identify and then find information in respect to what “they don’t know”;
• specify how new information connects with prior knowledge;
• share and test the viability of new conceptions; and
• reflect on how they personally constructed knowledge and became meaning makers
(meta-cognition) (O’Grady 2002).
The potential for developing understanding through PBL is further enhanced by a focus
not only on developing cognitive awareness (meta-cognition) but also on engaging
students in “learning for the whole person” (Heron 1993). Two examples of this holistic
approach to learning are an emphasis on developing general dispositions of the learner,
which Kay and Bawden (1996) suggest include the “itch to know” (propositional), “itch to
do” (practical), and “itch to be” (experiential), and the importance of working in a
collaborative setting (Rhem 1998).
But how does assessment work in PBL? Nowak and Plucker (1999) point out the need for
alignment between the intentions of PBL and assessment. Studies have shown that when
standardized tests are used, students doing PBL generally do not score as well as those
being taught using traditional teaching methods. However, when non-standardized forms
of assessment are employed, students doing PBL perform at least as well as other students
(Vernon & Blake, 1993), and when long-term retention is taken into account, PBL
students do better (Farnsworth, 1994).
Learning defined as understanding requires assessments that focus on the process of
developing understanding and not just the products of thinking (Nightingale et. al. 1996).
Assessment that focuses simply on what content knowledge students possess may fail to
measure the way in which students have reasoned out their answers and hence whether
the answer indeed represents understanding. Or, worse still, it sends a message to students
that the inquiry process does not really matter as long as the “correct” answer is reproduced.
Leaving aside the epistemological problems of right or wrong answers, assessing
answers without checking how ideas are understood and synthesised in order to
derive them also makes it difficult for the assessor to determine what the
difficulties or impediments to understanding are. While the assessment of content is
necessary, the intention in PBL is to evaluate how well understanding has been developed
such that meaningful responses to problems can be generated. In PBL, with the emphasis
on students learning how to learn (how to understand), students use the context and
experience of working on a problem to demonstrate how they develop an understanding
of ideas so they can meaningfully apply these ideas to a problem. Assessment is
therefore focused not just on the application of knowledge but also on the process of
meaning making, so that meaningful application is possible.
PBL at the Republic Polytechnic
At the Republic Polytechnic, PBL is used as the exclusive educative approach. It has a
number of distinct features, including a one-day, one-problem cycle, one facilitator for
five teams of five students per class, an integrative curriculum and a holistic approach to
grading and assessment (O’Grady 2002). One of the distinctive
features is the requirement that facilitators assess students’ performance daily and give
them both formative and summative feedback. In order to assess students in ways that
both foster and measure understanding, staff are expected to collect evidence of
students' understanding. Facilitators, with the help of technology, collect evidence that
allows them to make an overall assessment of each student's understanding. The methods
used to collect this evidence at the Republic Polytechnic include:
• Team Presentations
• Peer & Self Assessment
• Learning Journal
• Tests and Quizzes
• Observations of student discussions
Samuelowicz (1994) points out that it is not the assessment task which is critical in
developing understanding but the context in which these methods are put into practice.
Each of these methods, in the context of PBL, has an important instructional and
assessment dimension in the development of understanding. For the remainder of the
paper I shall explore how this approach to PBL, premised on students developing
understanding, employs these methods of assessment (individually and collectively) to
both foster and measure learning for understanding.
Team Presentations
At the Republic Polytechnic students are expected each day to develop a presentation in
response to the problem they have been presented with. The formal presentation is the
culmination of a team of five students’ research and discussions. The presentation is
shared with other teams and is compared and critiqued by their peers (who also offer
presentations) and the facilitator. Student presentations are extensively used in many PBL
courses (see Table 1).
While presentations have long been advocated as a valuable instructional and assessment
process, the emphasis is often upon the development of communication skills (Hay
1994). However, the value of the presentation is more than the practice of
“communication skills”. The function of presentations at the Republic Polytechnic is to
give a focus for students' collaborative efforts by expecting the team to organise and
explain its ideas.
A pupil does not really know what he has learned till he has organised it and
explained it to someone else. The mere recognition of what is right in someone
else’s wording is only the beginning of the awareness of truth. (Rowntree
1987:65).
Table 1: PBL courses using class presentations
• Northern Arizona University College of Education
http://jan.ucc.nau.edu/~lam37/eci350/syllsprg04.htm
• Department of Health Management, University of Missouri, Columbia
http://hmi.missouri.edu/course_materials/Residential_Informatics/semesters/W2000_Materials/401_hales/MI_401_sec2.htm
• University of Hong Kong sociology department
http://www.hku.hk/sociodep/courses/soci8004_03outline.doc
• University of Maryland School of Pharmacy
http://rxsecure.umaryland.edu/courses/PHAR540/assign.htm
• Indiana State University http://www.indstate.edu/thcme/PSP/pbl.html
• Department of Mechanical Engineering, Hong Kong Polytechnic University
http://tlrc.edc.polyu.edu.hk/html/theme/pbl_casestudy/ChuenChunWah.html#8
• Cornell Medical College http://class2004.med.cornell.edu/applicants/pbl.html
There are numerous studies on the positive effects of collaborative learning; one example
reports that students who collaborated in the preparation of a shared assignment tended to
engage in higher level cognitive strategies than those who worked by themselves (Tang
1998).
There are two major challenges in using classroom presentations in the instruction and
assessment of student understanding in PBL. The first is establishing criteria that
specify the expectations for students’ presentations (Hay 1994). Without criteria, students
are unsure of what is expected of them. At RP we have developed criteria (see Table 2)
that specify what would demonstrate understanding.
Table 2: Criteria for Student Presentations
• Statement of strategy for proposed solution to problem
• Information collation and analysis
• Rationale for problem solving strategy (statement of how the solution was reached)
• Ability to address questioning (for justification of proposed solution)
• Ability to compare and analyse multiple solutions/perspectives
• Communication or presentation skills (spans the other five dimensions)
The second challenge relates to the more fundamental problem that any presentation or
articulation of ideas is limited in its capacity to represent one's understanding, since what
we know and understand is more than the words or symbols we use to represent it
(Lakomski 2002). Given the limitations of representations of knowledge, a prepared
presentation should be regarded as the tip of an iceberg with respect to what students
actually know. In PBL the facilitator (and peers) will probe the insights further, in
particular how these insights relate to the problem trigger. While this interactive
questioning and probing helps to avoid presentations being unmindful collections of
information (information that is “copied and pasted” without a real sense of how the
information relates to the problem), it also affords the opportunity for students to build on
their insights and further extend their understanding by constructing responses or
questions.
Peer Assessment & Self Assessment
Peer assessment is the act of students assessing one another (Boud et al. 2001).
Self-assessment is judging the quality of one's own work. While there is some debate about
whether self and peer assessment is workable in PBL (Woods 1994), there is little doubt
that there is immense value in helping students to develop an understanding and
judgement of the quality of their own work and the work of others.
At the Republic Polytechnic students are constantly asked to make informal judgements
about one another's comments and team presentations. However, in a more formal sense
(formal meaning recorded) students are asked to complete a peer and self assessment
questionnaire. These questionnaires are constructed using research on the behaviours that
facilitate student centred discussions. Research (Shoop 2000) has shown that these
behaviours underpin meaningful collaboration which is a major premise for
understanding in PBL. Asking students to regularly reflect on, and hold themselves and
each other accountable for, these behaviours serves both to signal what behaviours are
desirable and to provide information that may be useful to the facilitator in making an
assessment of students’ understanding.
There is a good deal of scepticism about the validity of peer and self assessments
(Nightingale et al. 1996: 123-124) and there are staff at the Republic Polytechnic who
share this scepticism. Some staff perceive students as not taking the self assessment
seriously and as selecting ratings in the self and peer assessment at random.
A random selection of 50 students’ peer evaluations showed that students were making
distinctions from individual to individual. The ratings were analysed to see whether the
variations were due to random selection, by comparing peers' ratings to the overall daily
grade. What we found was that while the correlations were relatively low, they were
statistically significant, suggesting that there was more than just a random selection of
ratings.
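The check described above can be sketched in a few lines. The data values, the 1-5 rating scale and the function names here are invented for illustration; the actual analysis at RP may have used a different statistic or tool:

```python
# Hypothetical sketch: do peer ratings vary systematically with overall
# daily grades, or do they look random? All values below are invented.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented example: peer ratings (1-5) and daily grades (0-100) for 8 students.
peer_ratings = [3, 4, 2, 5, 4, 3, 5, 2]
daily_grades = [62, 71, 55, 88, 75, 60, 90, 58]

r = pearson_r(peer_ratings, daily_grades)
print(f"correlation between peer rating and daily grade: r = {r:.2f}")
```

A correlation near zero would be consistent with random clicking; a reliably non-zero correlation (as reported above) suggests students are discriminating between individuals, even if imperfectly.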
Another concern is that students, knowing this assessment is taken into account in the
summative judgement of their understanding, use this tool to rate themselves and their
peers in the best light possible (or in the worst possible light if they do not “get on”). This
relates to the criticism that peer and self-assessment is less accurate than expert
judgement. A comparison between peer and self assessments and the facilitator's
judgement of students' performance showed a fairly large difference between the two
judgements. However, when comparing self and peer assessments, students tended to
rate their peers higher than they rated themselves, suggesting they were not simply rating
on the basis of self-interest. One feasible conclusion from the analysis is that students are
still coming to terms with peer and self assessment, and while there may be occasions
where they overstate their abilities when compared with expert judgements, there is a
demonstrated willingness to make distinctions between the performance of different
peers and between self and peers.
The value of peer and self assessment extends beyond the validity of the assessments
themselves to include the organic experience of trying to develop an understanding of
one's own work and the work of others. Facilitators signal its importance, and help
students develop judgement about their own and others' performance, by taking peer and
self assessment into account when assessing students.
Learning Journal
A learning journal is where students record written reflections on their learning. The act
of writing helps to organise thoughts and facilitate understanding. Donald Schon (1983)
coined the term reflective practitioner to describe how professionals think in action. The
idea of reflection has since been used to describe an important process in learning. Jim
Butler’s (1994) model of human action and learning places reflection at the centre as “the
open, active communication channel between the social context and the inner self”.
Reflection is the crucible or catalyst which transforms “public knowledge” – theories,
book-knowledge, received as rules and codes – into “PPK – Private practical knowledge
– “what I know and use”. PPK is constantly enriched and renewed by vivid experiences
and can be transformed by paradigm shifts and value re-alignment. The most recent
layers can be examined, but much of it is probably so deeply embedded as to be
unavailable for conscious recall. Using PPK in the public domain of professional
practice is possible again only through reflection.
Reflection journals are recommended as an important assessment method in PBL
(Woods 1994). At the Republic Polytechnic students are expected to keep a reflection
journal for each of their modules and record an entry each day. At the end of each day,
facilitators at the Republic Polytechnic provide a question that students can choose to
respond to in their reflection journal. Students then record their thoughts about
the day and the problem they have worked on. The expectation for students as to what
constitutes a meaningful reflection is expressed in the criteria facilitators use to assess the
journal.
Table 3: Criteria for Reflection Journal
• Clarity in reasoning
• Strength to take and express a stance
• Evidence of milestones
• Awareness of learning preferences
What is expected of students extends beyond merely describing the events of the day or
listing the contents encountered while they were working on the problem. Students are
expected to articulate how they have personalised public knowledge or made sense of
what they have done during the day. A content analysis of 50 randomly selected journal
entries revealed that the majority of the entries (38 out of 50) contained personal insights
about how the students are learning and the personal significance of what they have
learnt, rather than just descriptions of the day’s events.
Tests and Quizzes
In many PBL courses various types of invigilated tests are used. Despite the criticism
levelled against tests, particularly objective tests where answers are predetermined,
multiple choice questions (MCQs) under semi-formal test conditions can be used to
assess reasoning processes (Gibbs 1992b). However, for MCQs to be effective in
fostering learning for understanding, it is important that the questions require
more than recall. While recall is an important learning process, by itself it may not be a
very good indicator of reasoning and understanding. The challenge of using MCQs is
that when a student selects an answer the facilitator may not see the student's reasoning
process and must infer it from the responses selected. While all assessment requires
some inference, the lack of gradation in the measurement of understanding or
misunderstanding makes this inference more difficult than in other assessments,
which allow students to express the varying quality of their understanding.
At the end of each day, students at the Republic Polytechnic take a short quiz that focuses
on understanding of specific aspects of content that was addressed throughout the day;
the quiz is an opportunity to compare their understanding with the understanding of a
subject expert. Students are generally given detailed verbal feedback about their
responses in class. Given that the quiz is completed at the end of the day, it is perhaps not
surprising that in a review of 50 randomly selected students the majority scored fairly
well on their daily quizzes. While there has been some concern about collusion, given
that the quizzes are not invigilated like more formal exams, the quiz has come to be
regarded as practice for the more formal regular understanding tests, which are
conducted under invigilated conditions. Facilitators generally agree that this is one
method of assessment that needs to be considered in light of the other forms of
assessment when evaluating the validity of the answers, both from the point of view of
understanding and of reliability.
Observations of student discussions
The role of the facilitator in PBL is to observe and guide students' learning and
discussions (Barrows and Tamblyn 1980, Barrows 1988). The facilitator assists student
learning through guidance at the meta-cognitive level. It is the facilitator’s expertise in
this process, not in the content areas in which the students are studying, that is important.
The students are expected to acquire the knowledge they need from content experts who
serve as consultants, as well as books, journals, and automated information sources. The
facilitator guides the students through repeated practices in reasoning and gives
encouragement and advice on how to enhance their ability to do self-directed study. In
the course of this guidance the facilitators are able to observe students working through
these processes and are able to formulate judgements about students' understanding based
on their engagement with the processes of learning e.g. problem definition, identification
of learning issues, reading, discussing, testing theories or possible solutions etc.
Facilitators at the Republic Polytechnic observe students in the classroom. Huba and
Freed (2000) suggest that such observation lets facilitators see the learning process and
so learn how students learn. However, facilitators' perspectives on the learning process
can influence what they observe and fail to observe (Pratt 1998). A group of facilitators
were asked what they observe when facilitating. Table 4 below gives three different
responses that represent a range of dimensions facilitators observe when in the classroom.
Table 4: What facilitators observe in the classroom
Facilitator 1
Student’s level of engagement with the problem
• Motivation level
• Whether he/she is genuinely trying or just putting up a show
• Questions that they ask would indicate level of understanding
• Their response to questions asked
• Amount of progress from last week
• Can students break down the problem
• Whether they are able to explain the problem in a coherent way
• Whether they are able to explain their solution (even if it is wrong), in a coherent way
• Whether they actively try to solve the problem that is stopping them or do they just give up
• Whether once the problem is solved do they try to pursue a better solution
• What they have found out and how they present it
• Whether it is individual work or copied
• If they got help, I would look out for coherence in their explanation; at the same time I
would check if they implemented what they have learnt from their friends or whether it is
just a relatively blind/vague copy to get past the day
• Ability to coach others
Facilitator 2
• Peer Evaluations
• How they work in their teams (when I go around or peek around or sudden checks)
• Punctuality
• Bonus for additional work/initiatives
Facilitator 3
• The quality of questions asked by students
• Ability to explain or defend their points when challenged by facilitator or other students
• Showing understanding of the day's topic by being able to apply it to different situations
• RJ has to demonstrate understanding of the day's lesson.
• Good behaviour in class (ie not being late or sleeping in class, being respectful when other teams are
presenting)
• Showing progress from 1st meeting to 3rd meeting
• Quality of class participation and team participation
• Interesting/creative solutions (eg making a movie instead of powerpoint) are a bonus.
These dimensions may reflect, on the one hand, the personal biases of different
facilitators in what they choose to observe, but they also reflect the different contexts in
which facilitators work. However, one could argue that the emphasis in most cases is
upon observing understanding in a broader sense than just what information students are
able to recall.
So far I have specified the different assessment methods used in the daily assessment at
the Republic Polytechnic. Each of these assessment methods, with its strengths and
weaknesses, is a means of fostering learning for understanding as well as of collecting
information or evidence about the quality of students' understanding. Thus far I have
emphasised the formative nature of assessment. However, facilitators are also expected to
use this information to derive a summative judgement – a daily grade. The challenge in
assessment for learning (PBL) is how to reconcile formative and summative judgements.
I have already alluded to the potential for learning to be undermined by the assignment of
a grade that can be interpreted as an outcome that appears to be external to the process of
learning. So how can a summative assessment be made without their being an adverse
affect (often unintended) on learning for understanding? One way of reconciling the
formative and summative elements of assessment is to employ a holistic approach.
Holistic Assessment
Holistic assessment entails using a number of assessments and then combining them
using a rationale that is defensible in relation to the objective of assessing
understanding, or, in other words, employing a "professional judgement" in the
assessment of students. Professional judgement implies a professional body to which the
judgement can be ascribed. Warren Piper (1994) suggests professional bodies have a
mastery and shared view of a body of knowledge, a commitment to the needs of a
clientele, and a sense of responsibility for decisions made by individuals counted as
members of the profession. Warren Piper goes on to ask whether teachers are
professionals. As stated earlier, there are competing perspectives suggesting a wide
array of teaching practices (and malpractices). Perhaps the best that can be hoped for is
that within certain perspectives or theoretical positions there are shared views about
learning, the learner and the responsibilities of a teacher; however, this seems somewhat
narrow and would undermine the idea of academic standards that are agreed upon, at
least notionally, across higher education systems. What is needed is an organising belief
or philosophy by which teachers can judge good teaching and learning, and hence inform
the exercise of "professional judgement", particularly in the realm of assessment, where
the "professionalism" of teachers is open to greatest scrutiny. In this section of the paper I
describe how learning for understanding, as the principle underpinning a holistic approach
to formative and summative judgements in PBL, is an attempt to be professional.
For formative assessment, facilitators at the Republic Polytechnic are expected to use the
information as it is collected to guide the ongoing facilitation process. Some information,
collected informally during class through observations and team presentations, may be used
immediately to prompt a question, comment or deliberate silence by the facilitator. Other
information, available only at the end of the day after students have submitted it
(reflection journal, quiz, peer and self assessments, along with evidence from observations
and team presentations), may be used to inform facilitation in subsequent problems. The
information is also used to formulate written feedback for students, which is
attached to students' submitted assessments. The comments are relatively brief but are
meant to focus on specifying how students can further develop their capacity to
understand (make sense of the world around them). The critical decision the facilitator
needs to make is: given all the information they have collected, what should they comment
on? It is simply not possible for the facilitator to respond to all the information they have
about a student. So they must choose the comments that would best help the student develop
their capacities for understanding. This requires the judgement of the facilitator after
considering what she/he knows about the student.
For summative assessment, the facilitator is expected to derive a daily grade that will be
used, along with other daily grades and performance on a series of understanding tests, to
compute an overall module grade. Facilitators are once again faced with the problem of
having to summarise a significant amount of information into broad grading
bands. To encourage a professional judgement based on what is known about the
student, no weighting of evidence or algorithm for calculating the grade is given to the
staff. What is expected of facilitators is that, like doctors, who are required to make a
diagnosis based on pieces of information collected using different tests (blood pressure
readings, x-rays, throat examinations etc.), they should judge each person on the merits of
the evidence at hand. How they determine which pieces of evidence are regarded as
critical for each student is left up to the professional judgement of the facilitator.
This holistic approach to judging allows facilitators to judge students' understanding
according to however each student is best able to demonstrate, and the facilitator best able
to interpret, understanding (or misunderstanding). This is not an easy task because the
evidence collected from the different assessment methods may paint a complex picture of
varying degrees of understanding on issues related to the problem. In fact, sometimes the
evidence is perceived to be contradictory. However, all this simply reinforces the idea that
understanding is a complex process that is not easily represented and interpreted.
Given the complex nature of understanding as learning, it seems reasonable to argue that
facilitators need to collect as much evidence as possible. A single assessment method (say,
a 3-hour invigilated exam) may produce a more manageable picture of students'
performance, but it may fail to recognise the complexity of thought and may seduce
examiners into thinking understanding is easily tested and measured. Worse still, it may
give students the impression that learning is simply the sum of the expectations and
requirements of that single test.
A holistic approach is sometimes criticised as being subjective, and hence inherently
unfair, because the facilitator is judging each student differently. However, I would argue
that this differentiation in judgement is precisely what makes the formative and
summative judgement fair. Students in holistic assessment are still judged against general
criteria and the objectives of the problem, but in the measurement and fostering of
understanding, the uniqueness of how each student understands is contextualised into the
judgement.
Formative assessment is not undermined by holistic assessment, since students are
afforded multiple opportunities to demonstrate their understanding and to get feedback on
how to improve. Also, by considering each piece of assessment in relation to the others,
facilitators can construct a more complex picture of students' understanding and respond
in more meaningful ways, rather than relying on inferences that may be more tenuous
because they are based on assessment with less dimensionality. Summative assessment,
the measurement of learning and its summation as a grade, is not weakened by holistic
assessment, since multiple assessments help to ensure greater defensibility in the
determination of the grade. The triangulation and verification of the various pieces of
evidence about understanding can help the facilitator determine a grade that fits the
criteria associated with each grade band. The grade students receive will have attached to
it substantive formative feedback that not only indicates how they can improve but also
gives the rationale for their specific daily grade.
Conclusion
Learning defined as deep understanding has serious implications for how we both teach
and assess. I have suggested that the processes of PBL lend themselves well to the
definition of learning as understanding because assessment is regarded as an integral
element in the facilitation of learning. In reviewing each of the assessment methods used
for the daily assessment of students at the Republic Polytechnic, I have tried to
demonstrate how each has both an instructional and a measurement element. Furthermore,
I have argued that taking a holistic approach to the summative and formative functions of
assessment can reconcile the potential conflict between fostering learning and the
measurement of learning.
References
Barrows, H. S., Tamblyn R. M. 1980. Problem-Based Learning: An Approach to Medical
Education. Springer, New York
Barrows, H. S. 1988. The Tutorial Process. Southern Illinois School of Medicine, Springfield,
Illinois
Biggs, J. B. & Collis, K.F. (1982) Evaluating the Quality of Learning: The SOLO Taxonomy.
New York, Academic Press.
Boud, D., Cohen, R. and Sampson, J. (2001) Peer learning and assessment. In Peer Learning in
Higher Education. London, Kogan Page. pp. 67-81.
Boud, D. & Feletti, G. (1991). The Challenge of Problem Based Learning. London. Kogan Page.
Brown, John Seely, Collins, Allan and Duguid, Paul (1989) Situated Cognition and the Culture of
Learning. Educational Researcher, 18(1), pp. 32-42, Jan-Feb.
Bruner, J.S. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Butler, Jim (1994) From action to thought: The fulfilment of human potential. In J. Edwards (ed.)
Thinking International Interdisciplinary Perspectives. Melbourne, Hawker Brownlow, pp. 16-22.
Carter, Annie and Palermo, Josephine (2004) Reflections on Learning and Self Assessment: A
Case Study of Problem Based Learning. http://www.tamil.net/people/indy/Carter.htm
Clancey, W. J. (1993). Situated action: a neuropsychological interpretation. Response to Vera and
Simon. Cognitive Science, 17, 87-116.
Dewey, J. (1980). Thinking in education. In J. Boydston (Ed.), John Dewey: The middle works,
1899-1924: (Vol. 9, pp. 159-170). Carbondale and Edwardsville, IL: Southern Illinois University
Press. (Original work published in 1916).
Farnsworth, C. C. (1994). Using computer simulations in problem-based learning. In M. Orey
(Ed.), Proceedings of the Thirty-fifth ADCIS Conference (pp. 137-140). Nashville, TN: Omni
Press.
Fosnot, C. (1996). Constructivism: A Psychological theory of learning. In C. Fosnot (Ed.)
Constructivism: Theory, perspectives, and practice, (pp.8-33). New York: Teachers College
Press.
Gibbs, G. 1992a. Improving the Quality of Student Learning. Technical and Educational Services,
Bristol.
Gibbs, G. 1992b. Assessing More Students. Oxford, Oxford Centre for Staff Development.
Hay, I. (1994) Justifying and applying oral presentations in geographical education. Journal of
Geography in Higher Education, 18(1), 43-55.
Heron, J. (1993) Group Facilitation London, Kogan Page.
Herrick, Michael J. (1996) Assessment of Student Achievement and Learning, What Would
Dewey Say? A 'Recent' Interview with John Dewey. Journal of Vocational and Technical
Education. Volume 13, Number 1.
Hoffman, Bob, Jones, Diana, Lewis, Dave, Lucas, Gail and Ritchie, Donn (1996) Assessment
of problem based learning: students and classes. CSU Instructional Technology Initiatives,
The California State University. http://edweb.sdsu.edu/clrit/webassess/studentNclasses.html
Huba, M.E. and Freed, J.E. (2000) Learner-Centered Assessment on College Campuses Boston:
Allyn and Bacon.
Maudsley, G. (1999) Do we all mean the same thing by Problem-based Learning? A review of the
concepts and a formulation of the ground rules. In Academic Medicine, 74 (2). 178-184.
Kay, R. and R. Bawden (1996). “Learning to be systematic: some reflections from a learning
organization.” The Learning Organization 3(5).
http://www.emerald-library.com/brev/11903ec1.htm
Lakomski, G. (2002) How we think – Modern cognitive science and what it tells us about
teaching and learning. Proceedings of the Second Symposium on Teaching and Learning in
Higher Education, CDTL, National University of Singapore. 4-6 September 2002.
Minsky, Marvin (1986) The Society of Mind New York, Simon and Schuster.
Mohanan, K.P. (2004) PBL in Perspective. CDTL, The National University of Singapore.
http://www.cdtl.nus.edu/publications/pbl/default.htm Downloaded: Feb 15 2004
Nightingale, P., Te Wiata, I., Toohey, S., Ryan, G., Hughes, C., and Magin, D. (1996) Assessing
Learning in Universities (1st ed.). Sydney: University of New South Wales Press.
Nowak, Jeffrey A. and Plucker, Jonathan A. (1999) Do as I Say, Not as I Do? Student
Assessment in Problem Based Learning. Indiana University.
http://www.indiana.edu/~legobots/q515/pbl.html
Nurrenbern, S.C., & Pickering, M. (1987). Concept learning versus problem-solving: Is there a
difference? Journal of Chemical Education, 64(6), 508-510
O'Grady, Glen & Alwis, W.A.M. (2002) One Day, One Problem: PBL at the Republic
Polytechnic. 4th Asia-Pacific Conference on PBL, Hatyai, Thailand, December 2002.
Perry, W.G. (1970) Forms of Intellectual and Ethical Development in the College Years: A
Scheme. New York: Holt, Rinehart and Winston.
Pratt, D. and associates (1998) Five Perspectives on Teaching in Adult and Higher
Education. Florida, Krieger Publishing.
Ramsden, P. (1992) Learning to teach in Higher Education. London, Routledge.
Rhem James (1998) Problem-Based Learning: An Introduction The National Teaching &
Learning Forum on-line edition December Vol. 8 No. 1
Rowntree, D. (1987) Assessing Students: How Shall We Know Them? (2nd revised ed.).
London: Kogan Page.
Samuelowicz, K. (1994) Teaching conceptions and teaching practice: a case of assessment. In
Phenomenography – Philosophy and Practice. Brisbane: Queensland University of Technology.
pp. 343-354.
Schon, Donald (1983) The Reflective Practitioner: How professionals think in action New York,
Basic Books.
Shoop, Linda (2000) Student-Centred Discussion. Workshop Notes, Ngee Ann Polytechnic,
Singapore, May 25th and 26th 2000.
Schmidt, H.G. (1983). Problem-based learning: rationale and description. Medical Education, 17,
11-16.
Tang, C. (1998) Collaborative Learning and Quality of Assignments in Teaching and Learning in
Higher Education. B. Dart & G. Boulton-Lewis (eds.) ACER. Camberwell, Victoria.
Vernon, D.T. & Blake, R.L. (1993). Does problem-based learning work? A meta-analysis of
evaluative research. Academic Medicine, 68(7) 550-563.
von Glasersfeld, E. (1996). Introduction: Aspects of constructivism. In C. Fosnot (Ed.),
Constructivism: Theory, perspectives, and practice, (pp. 3-7). New York: Teachers College Press.
von Glasersfeld, E. (1998). Cognition, construction of knowledge and teaching. (ERIC Document
Reproduction Service No. ED 294 754).
Warren Piper, D., Nulty, D. D., and O'Grady, G. (1996) Examination Practices and Procedures in
Australian Universities. Higher Education Division, Department of Employment, Education and
Training, Evaluations and Investigations Program. AGPS, Canberra.
Warren Piper, D. (1994) Are Professors Professional? The organisation of University
examinations. London, Jessica Kingsley Publishers.
Woods, D.R. (1994). Problem-based Learning: how to gain the most from PBL. Waterdown,
Canada