As the weeks passed, I tried to connect with Kevin, but he was indifferent to my attempts. When I asked him questions, he only shrugged or ignored me, and he refused to do his work. I began to wonder how he was going to manage when learning centers started.
To start, I set up five centers: lab, science skills, reading, technology and a makerspace. The idea was to have students spend an entire block of class time at a single center, completing a content-based activity or open-ended prompt. Over the course of two weeks, students would alternate each day between being at a center and being in direct instruction. By the end of the cycle, they would have completed four centers and spent four days with me. The other two days were free for whole-class meetings, assemblies, remediation, larger labs or other activities. Students would build a portfolio of center work, add a reflection, and I would have the perfect assessment, tailored to each student.
To get ready for the centers, I spent the first three weeks going over lab safety, reiterating classroom expectations and guiding the class through an entire cycle so they could see what each center was all about.
Over the following months, Kevin slowly came out of his shell, producing increasingly impressive work in each center. He still struggles during direct instruction, but when he is given a prompt, materials to design with, and space to learn his own way, he continues to surpass many of his peers.
I spent a lot of time thinking about why this model works so well, not only for students like Kevin, but for everyone. I think it comes down to students being responsible for their own learning and realizing that no one is going to stand behind them, forcing them to learn. In their reflections, students answer three questions: Which center are you reflecting on? Why did you choose that center? What did you learn about the content or yourself at that center?
This story is part of an EdSurge Research series about how personalized learning is implemented in different school communities across the country. These stories are made publicly available with support from the Chan Zuckerberg Initiative, which had no influence over the content in this story. (Read our ethics statement here.) This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The idea for this assignment came to me while I was reading articles on teaching undergraduate psychology classes: I discovered PeerWise (Denny et al., 2008), a free web-based platform that allows students to create, share and evaluate multiple choice questions related to course material. I decided then to design an assignment where I turned the tables: instead of me giving students questions to answer, I assigned students the task of creating questions for each other to answer.
Students create six multiple choice questions related to the course material. Each question has to include one correct answer and a minimum of three distractors. Students provide an explanation for the choice of correct answer and reasons why the distractors are incorrect.
Students then answer 20 questions created by their peers, rate them in terms of difficulty (easy, medium, hard) and quality (with a numeric rating), and provide constructive comments to the author. All of this is done anonymously, which promotes broader student participation in conversations. Note that, as the instructor, I can identify the students and intervene in conversations where students are clearly struggling with concepts or where questions are inappropriate. The platform allows students to filter questions: they can choose which ones to answer based on popularity among peers, level of difficulty, and number of comments received. Most importantly, students can filter questions by topic in order to home in on areas where they feel they need more practice.
Do students use the assignment to prepare for course exams? The graph below shows PeerWise activity for one of my classes. Students clearly used the assignment to prepare for exams. However, in their feedback, many students indicated that it did not actually help them obtain better grades on the exams. With proper ethics approval, it would be worthwhile to find out whether that was really the case.
The assignment has gone through some revisions over the course of three years and five classes in which I have used it. Live and learn. Here are some of the updates I thought were worth sharing:
The American Educational Research Association has also cautioned against the use of VAM in teacher evaluation.2 Even the Bill & Melinda Gates Foundation, a major proponent of VAM, has backed off its initial support.3
While there are conditions and limits, states now have an opportunity to reshape their assessment and accountability systems, which in turn could lead to more fundamental changes in public education as a whole. We encourage them to consider implementing performance-based assessment. Given that the tide is turning away from testing for accountability purposes, a time-honored approach that has worked for students and teachers in New York deserves a second look.
The Consortium, a coalition of nearly 40 public high schools, has shown that acquiring academic knowledge and skills requires helping students engage with the power of ideas. The Consortium schools rely on a constructive assessment system that grows out of curriculum, respects teachers as the professionals they are, and initiates collaborative projects with other groups of teachers and schools as opposed to the competitive structures set up by past federal education policies such as No Child Left Behind and Race to the Top.
Responding to this changing education landscape, the public schools formed the New York Performance Standards Consortium and reached out to the United Federation of Teachers, as well as the parent community and other political allies, to gain support. The move led to a direct confrontation with Mills and his allies in the state Department of Education.
The Consortium continues to support a teacher-designed, student-focused, and externally reviewed assessment system that provides a fuller and deeper measure of student achievement than standardized testing.
In building and sustaining its approach, Consortium educators understand what subsequent years of headlines and published standardized test results have failed to acknowledge: a crucial link exists among assessment, curriculum, and teaching. High-stakes, test-driven assessment inhibits collaboration among educators, hinders student engagement, and undermines critical thinking.
In a Consortium school, students engage in extensive reading, writing, analysis, and discussion in all classrooms, which is work that builds toward the graduation or performance-based assessment tasks (PBATs) required of every Consortium student:
PBATs incorporate commonly accepted learning standards, enjoining students to write well, read analytically, punctuate properly, solve geometry problems, and be mathematically literate, but they also require students to do work that challenges and engages their thinking. For instance, such work includes researching and writing substantive essays that analyze different viewpoints; formulating, conducting, and analyzing the findings of their own science experiments; applying mathematical concepts to concrete problems; and interviewing adults who have subject-matter expertise. Consequently, the assignments, discussions, debates, experiments, and research projects that one sees in Consortium schools align with and often exceed college-level expectations and norms.
Consortium schools also differ from traditional public schools in the diversity of course offerings, which also helps to keep students engaged. For example, in one Consortium school last year, social studies offerings included (but were not limited to) semester-long classes with titles such as Constitutional Law, the Civil War and Reconstruction, Popular Culture in the 1920s and the Present, Political Philosophy, Ethics, Biographies, the History of Black Cinema, Economic Policy and the American Dream, Modern Chinese History, India: Colonialism and Independence, the History and Politics of Disney Films, Puerto Rican History, Slave Revolutions, and Comparative Religion. In each of these courses, a wide variety of sources and teacher- and student-derived questions were explored. As is standard in Consortium schools, students are allowed to choose, with teacher input, which courses and performance assessments most interest them and suit them best.
Homework assignments complement course and assessment choices and build skills required to complete performance-based assessments. The homework requires students to support their opinions and interpretations with evidence and organize their thoughts coherently.
Teachers inform students when the work they have done on a particular assignment is strong enough to merit the research and revision process involved in producing a PBAT. To begin a PBAT, a student engages in a period of intensive work, which culminates in an oral presentation of a paper to a committee of outside examiners who discuss both the paper and related topics with the student.
Consortium schools enroll a larger percentage of minority and low-income students than the overall New York City public school population. Although these students begin school with lower academic achievement, they graduate from Consortium schools and attend college at higher rates. For example, the graduation rate of black students from Consortium schools is 74.7 percent, compared with 63.8 percent for all New York City public schools. For Latino students, the graduation rate from Consortium schools is also higher than the rate from all New York City public schools: 71.2 percent compared with 61.4 percent.11
As the Consortium schools have shown, performance assessment is a clear and superior alternative to standardized testing because it enables teachers to make effective, productive judgments about what their students know and can do.