A friend of mine recently tried a standards-based grading (SBG) approach for her Calculus II course. (You can read about Kate's experience on her blog.) I find this approach to evaluation very interesting and, after seeing Kate's success, I am eager to learn more.
There are plenty of resources for SBG at pre-college levels, but it seems that college level resources are somewhat scarce and hard to sort from the giant haystack of pre-college resources. I'm hoping we can build a list here or that someone can point me to a list compiled elsewhere. I am especially interested in seeing other accounts and examples of using SBG in college math courses, especially for calculus courses.
SBG is based on a fine division of the course objectives into concepts, tasks, skills, or other learning objectives, henceforth called standards for simplicity. Students are evaluated on each standard using a coarse scale, typically 0-4, indicating their level of mastery of the standard in question. Final course grades are assigned according to the portion of the standards that each student has mastered at an appropriate level.
Implementation details vary. In the course my friend taught, students were regularly assessed and re-assessed on each standard, using tests and quizzes to measure their current level of mastery as the course advanced. Students were also given some opportunities to ask for specific standards to be re-tested. When a standard appeared on the final exam, the average of the latest course score and the final-exam score was used to compute the final score for that standard. For final course evaluations, the letter grades A, B, and C corresponded roughly to earning 4, 3, and 2, respectively, on at least 85% of the standards.
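To make the scheme above concrete, here is a hypothetical sketch of the final-grade rule in code. The names (`Standard`, `course_grade`) and the structure are my own illustration, not anything from Kate's actual course; it assumes every standard appeared on the final exam, so each final score is the average of the latest course score and the exam score:

```cpp
#include <string>
#include <vector>

// Illustrative sketch only: each standard's final score is the average of
// the latest in-course score and the final-exam score (both on the 0-4
// scale), and a letter grade requires reaching the corresponding level
// (A=4, B=3, C=2) on at least 85% of the standards.
struct Standard {
    double latest_score;  // most recent in-course score, 0-4
    double exam_score;    // score on the final exam, 0-4
};

std::string course_grade(const std::vector<Standard>& standards) {
    for (double level : {4.0, 3.0, 2.0}) {
        int met = 0;
        for (const auto& s : standards) {
            double final_score = (s.latest_score + s.exam_score) / 2.0;
            if (final_score >= level) ++met;
        }
        if (met >= 0.85 * standards.size()) {
            if (level == 4.0) return "A";
            if (level == 3.0) return "B";
            return "C";
        }
    }
    return "below C";
}
```

So a student who averages 4 on nine of ten standards but bombs the tenth still earns an A, which is exactly the point: mastery of most standards, not an accumulation of points.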
I've done SBG (aka "criterion-based assessment") in an undergraduate mathematics course, and I heartily recommend it. I've been trying to write up my experience with the course that I tried it in. Here's the part on SBG (warning: this is in draft form):
I have never been a fan of numerical assessment in exams. (This might have something to do with the fact that my undergraduate university, Oxford, used no fewer than four different schemes for devising a final grade.)
My reason for this is simple. Learning is not a continuous process; whilst it often proceeds in small increments, there are definite leaps from one level to the next. It is these leaps that we should be focussing on measuring: has a student achieved the next level, or are they still at the previous one? Within those levels, there is not much to separate students.

Grading a paper purely numerically paints a false picture. It says that learning is continuous, and that the student who got 68 points is better than the student who got 65 points. We then draw graphs showing a nice normal distribution and select cut-off points for the different grades.
So we take something discrete, namely the amount the student has learnt, force it into something continuous, their grade, and finally make it discrete again. As any topologist could tell you, discretising a continuous function is a Bad Idea. Suppose we decide on 70 as a grade boundary. What about the student who got 69? Are they really qualitatively worse than the one who got 70? It was only one point! But by that argument, all students should get the same grade, since each possible total differs by only one point from the next and therefore should earn the same grade as it.
Most likely, the grade boundaries are not as solid as that. We look carefully at students on the boundaries (well, we ought to; whether we do or not is another matter). But this is actually worse. Not only are we acknowledging that the original system is incorrect, we are fixing it in an ad hoc way that is completely opaque to the students.
The exam is viewed in a different light. Instead of a slow accumulation of points, each unit is graded according to the rule "Is this evidence of a particular skill?". At the end, the question becomes "Has this student shown sufficient evidence to justify this grade?".
When grading, my imaginary scenario is the following. I imagine that I award a student a certain grade. Then I imagine that some time later the lecturer for another course marches into my office demanding to know why I awarded that grade (since, clearly, the student is as thick as two short planks). The exam is then my evidence for that grade. If I cannot use the exam to defend the grade, the student does not get it.
(This has a useful side effect. It so happens that I sometimes get asked to write references for students who have taken this course. However, it is a medium-to-large course, about 100 students, so there are very few about whom I can say much. But as my grades are tied to specific achievements, I can list those achievements and say that the student has demonstrated that ability.)
These three levels corresponded to grades E, C, and A. Not every part of every question could provide evidence for all three levels. In particular, one question simply asked for specific statements and so could only provide evidence for an E grade.
The intermediate grades (D and B) were very definitely low versions of the next grade up. Thus the qualitative leap was between E and D, and C and B. The distinction between D and C and between B and A was quantitative. So a C-grade student was very definitely C-grade.
In addition to the above, let me link to the scheme of work for the course, where you can see what the specific learning outcomes were (together with how they translate into grades). Exploring that site will tell you more about the course if that's useful to you. There are also links to exam papers (of both types).
I think it is also worth pointing out a couple of additional advantages. The exam was (comparatively) a doddle to write, and the number of queries and complaints that I got about grades in that year was far fewer than in any other year with the same course. I have no proof of causation, but it's a very strong correlation!
So use them! I have the great benefit of being married to a(n excellent) teacher. Every time I hear about some innovation in university-level education, I get the same line from her: "We've been doing that for years; it's called XYZ." Then I get on my high horse and say that of course it's different at university because of ABC. Finally, when I actually try it, I realise that she was right all along and that, at heart, it really is the same.
I have a very simple program that outputs a simple JSON string, which I manually concatenate together and write to the std::cout stream (the output really is that simple). However, some of my strings could contain double quotes, curly braces, and other characters that would break the JSON. So I need a library (or, more accurately, a single function) to escape strings according to the JSON standard: as lightweight as possible, nothing more, nothing less.
I found a few libraries that encode whole objects into JSON, but given that my program is a 900-line .cpp file, I would rather not rely on a library several times bigger than my program just to achieve something this simple.
You can either call their escape_string() method directly (note that this is a bit tricky; see the comment below by Lukas Salich), or you can take their implementation of escape_string() as a starting point for your own:
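If all you need is the one function, an escaper is short enough to write yourself. The following is my own minimal sketch (the name escape_json is mine, and it is not the library's escape_string()); it covers the escapes the JSON grammar actually requires — the quote, the backslash, and all control characters below U+0020 — and assumes the input is already valid UTF-8, which JSON passes through unescaped:

```cpp
#include <cstdio>
#include <string>

// Minimal JSON string escaper (illustrative sketch, not a library function).
// Escapes " and \, uses the short escapes for common control characters,
// and falls back to \u00XX for the remaining control characters below 0x20.
// Non-ASCII bytes are assumed to be valid UTF-8 and are passed through.
std::string escape_json(const std::string& s) {
    std::string out;
    out.reserve(s.size() + 2);
    for (unsigned char c : s) {
        switch (c) {
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            case '\b': out += "\\b";  break;
            case '\f': out += "\\f";  break;
            case '\n': out += "\\n";  break;
            case '\r': out += "\\r";  break;
            case '\t': out += "\\t";  break;
            default:
                if (c < 0x20) {  // remaining control characters -> \u00XX
                    char buf[8];
                    std::snprintf(buf, sizeof buf, "\\u%04x", c);
                    out += buf;
                } else {
                    out += static_cast<char>(c);
                }
        }
    }
    return out;
}
```

You would then wrap each escaped value in double quotes when concatenating your output, e.g. `std::cout << "{\"msg\": \"" << escape_json(msg) << "\"}";`.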
You didn't say exactly where those strings you're cobbling together are coming from, originally, so this may not be of any use. But if they all happen to live in the code, as @isnullxbh mentioned in this comment to an answer on a different question, another option is to leverage a lovely C++11 feature: Raw string literals.
I won't quote cppreference's long-winded, standards-based explanation; you can read it there yourself. Basically, though, raw strings bring to C++ the same sort of programmer-delimited literals, with no restrictions on content, that you get from here-docs in the shell, and which languages like Perl use so effectively. (Prefixed quoting using curly braces may be Perl's single greatest invention. :)
In C++11 or later, a raw string literal is written with a capital R before the opening double quote; inside the quotes, the string body is preceded by a free-form delimiter (up to 16 characters, and it may be empty) followed by an opening parenthesis.
From there on, you can safely write literally anything other than a closing parenthesis followed by your chosen delimiter. That sequence, followed by the closing double quote, terminates the raw literal, and you have a string whose contents you can confidently trust will remain unmolested by any escaping or string processing.
I've been watching the testing debate with interest for years and have seen both sides of the problem. Sadly, many people coming out of high school simply do not have the basic skills to think or communicate effectively, making them unable to compete in the business world. Yes, it is critical that the schools be improved and, in some way, demonstrate a continuing improvement.
However, the current rage for programmed testing may prove to have other serious problems attached. We can already see that education has changed drastically so that teachers are "teaching to the test" rather than fostering growth and a lasting interest in learning. I deeply fear that this may well lead to a new generation that has learned how to take tests, but has neither internalized the information in any context other than that of a final test (and therefore will not retain that knowledge), nor learned how to learn for the sake of problem solving, rather than test taking.
Science and engineering problems in the real world don't come on a multiple choice test -- you have to know how to set up the problem and then solve the problem. The purpose of History and Social Studies is to make thinking, knowledgeable citizens able to make intelligent decisions that will shape the future of this country. This requires understanding complex, multi-faceted issues that cannot be distilled into simple true-false or multiple-choice questions.