Yishay, you have pointed out two starkly divergent views on
evaluation that I have observed firsthand throughout my career. Obviously, my
hand is up for the "Evaluation as life force of learning design"
perspective. But I certainly understand why many people lean toward the "evaluation is like cod liver oil" side of the continuum.
For me, it all goes back to decision making. We make decisions on different bases depending on the situation. Sometimes intuition may be a fine foundation for a decision, especially in a creative activity like design.
In the real world of education, decisions are often made on the basis of habit
(we have always done it that way), politics (the IT administration has bought X
course management system and we have to use it), personal preferences (we
should go with iPads instead of another tablet because I like Apple products),
etc. I believe, and my experience tells me, that evaluation activities yielding reliable and valid information provide a better foundation for decision making in most cases than habit, politics, personal preferences, and even intuition.
That said, evaluation takes more time and effort away from the design and development process than many people are willing to invest. Many evaluations are conducted simply to meet the requirements of a funding agency. In other instances, evaluation information is collected, but by the time the results are in, the decision point has already passed.
Another problem is that evaluation results are sometimes completely ignored or even misused. My very first formal evaluation was in 1974-75, when I was a graduate student at Syracuse University. The hot technology at that time was the handheld four-function calculator, costing over $100 each. A very well-known company gave a professor at Syracuse a contract to evaluate an elementary school math curriculum designed around calculators, and he hired me and several other graduate students to conduct the evaluation. We evaluated the curriculum in several schools around Syracuse and found that it was just awful for many reasons. It had clearly been designed in a rush, with the primary goal of selling calculators rather than providing effective math education for children. We sent the company our evaluation report, confident that they would make significant changes in the design of the materials or perhaps start over with a whole new design. To our chagrin, instead of revising the curriculum, the company began to market the materials with full-color advertisements in magazines for teachers and administrators. The ads included a statement to the effect that “These materials have been evaluated by education experts at Syracuse University,” implying that Syracuse had endorsed the program and totally omitting the findings that the materials were ineffective. No wonder some people are skeptical about evaluation.
Returning to your call for a show of hands, Yishay, I look forward to seeing other perspectives on this issue.
Clearly it is important to check whether the designed outcomes are being achieved, but all too often more time is taken evaluating than innovating. All new enterprises require a measure of courage and tenacity, balanced with the ability to adapt quickly. The trick is to know which approach to take. Often it is obvious how the course design fits the needs of the students and performs against the desired learning outcomes, but not always; so evaluation is needed. It can be very tempting to capture a wide range of data rather than conduct a focused investigation of key areas of interest and innovation. Of course, any evaluation also depends on your point of view: stakeholders will have different priorities. Nor can stakeholder groups be assumed to be homogeneous; not everyone will have the same needs, expectations, and motivations, which makes evaluation difficult. Perhaps the answer is a balance of creative intuition and focused evaluation.
Diana, you touch on an issue that has long troubled me: the veiled threat often insinuated by the word “evaluation.” No one wants to be “evaluated,” which is why I try to use the term “assessment” when referring to any kind of evaluative activity focused on the characteristics of people (their aptitudes, abilities, achievement, attitudes, etc.) and reserve the term “evaluation” for evaluative activities focused on things (programs, products, projects, and places). I like the sense of “developmental testing” as you describe it. This weekend, I listened to an excellent presentation by George Siemens titled “Toward New Models of Coherence: Responding to the Fragmentation of Higher Education” that he gave at the University of Victoria on February 7, 2013.
http://www.elearnspace.org/blog/
Toward the end of the session, in response to a question, George contrasted the experience of trying to get a course developed and approved within the traditional academic system (which can take two or more years) with the rapid development system used by Udacity, which he claimed provides formative peer review of new modules within 24 hours of faculty developing them and full course development within a month, all driven by rigorous “developmental testing” (although he used the term evaluation). George is by no means a proponent of the xMOOCs emerging from Udacity, Coursera, or edX, but he does recognize that they are doing some things right, especially the use of formative evaluation (or better, developmental testing). Of course, you and your colleagues at the OU and the LKL have done this type of developmental testing quite well for much longer.
sent from my Newton 800
It's quite interesting that "evaluation" is such a dirty word, at least to some people. I personally don't see it as problematic; then again, I associate it only with recurring analysis and course correction, as opposed to something that could be considered punitive.
I wonder if one contributing factor to my view of "evaluation" is that I came into this field when agile software development and agile processes were first all the rage; thus I was never exposed to a monolithic make-or-break reality (if it ever existed) :-)
I see quite a few people using assessment and evaluation interchangeably; however, my understanding is that assessment is (usually) reserved for the assessment of learners and how well they can demonstrate mastery of learning outcomes, while evaluation refers to the evaluation of the learning intervention itself.
sent from my Newton 800
Bob, I just read your blog post about Week 7 at http://thedigitalday.wordpress.com/2013/02/26/oldsmooc-w7/ You have provided a rich and realistic discussion of the issues around formative evaluation in the context of supporting academic staff engaged in online teaching. The analogy of being an aircraft mechanic for instructors flying by the seat of their pants is spot on.
In 1987, John Sculley, then CEO of Apple Computer, presented Apple’s vision of the future with a five-minute video called the “Knowledge Navigator.” The video features an academic getting ready to teach a class (by lecturing) in 2011 with the assistance of a bow-tie-wearing intelligent agent.
http://archive.org/details/DigibarnKnowledgeNavigatorAppleConcept1987
Thank goodness Apple’s vision of an AI agent has not become reality, Bob, or you might be out of a job. Instead, we have Siri on our iPhones and iPads, but it is not very capable. I just asked Siri to “Show me the Knowledge Navigator video,” and Siri replied, “I can’t search for videos.” I then asked Siri, “Do you have any good ideas for teaching my class tonight?” and Siri replied with a one-word response: “Flatterer.” Hmmmm.
In your blog post, you listed the factors you consider when you review an online course for one of your clients.
These are all important characteristics. Some are obviously focused on technical issues and some on pedagogical (learning design) issues. I have found that “alignment” among the pedagogical dimensions is one of the weakest elements of many online and blended courses. There are at least seven critical components of any learning environment (online, blended, or traditional) that must be carefully aligned: 1) objectives, 2) content, 3) pedagogical strategies, 4) learner tasks, 5) teacher roles, 6) technology roles, and 7) assessment. Misalignment of these components is obvious in many higher education courses and programs. For example, although most higher education institutions present mission statements that emphasize the importance of graduates developing strong skills in critical thinking, creativity, and writing, too few courses actually address these types of higher-order outcomes, and fewer still assess them. Although objectives and assessment activities are the components most frequently misaligned, each of the seven components must be carefully considered when examining the overall alignment of a course. (My wife, Trisha, and I have described the issue of alignment in more detail in: Reeves, T. C., & Reeves, P. M. (2012). Designing online and blended learning. In L. Hunt & D. Chalmers (Eds.), University teaching in focus: A learning-centred approach (pp. 112-127). Camberwell, VIC, Australia: ACER Press.)
In your blog post, you also mentioned looking at Quality Assurance systems for online learning. I did an analysis of these systems for a group of Australian universities in late 2011 and found the Quality Matters peer review approach to stand out: http://www.qmprogram.org/ An excellent book on this topic is: Jung, I., & Latchem, C. (2011). Quality assurance and accreditation in distance education and e-learning: Models, policies and research. New York and London: Routledge.
Thank you again for your thoughtful contributions to our discussion of formative evaluation this week.
I've just received Stephen Downes's daily newsletter, and he mentions a recent article published in the European Journal of Education, "Changing Assessment" (http://onlinelibrary.wiley.com/doi/10.1111/ejed.12018/pdf), which focuses on the problem of wanting to change curricula and learning objectives while keeping assessment based on testing.
This focus on examinations of factual knowledge and formal testing really disturbs me. So much effort goes into exams: preparing learners, logistics that cost a lot of money (invigilation, even police involvement in the distribution of national exams, etc.), and a few weeks later all this factual knowledge has been erased from the students' brains. If I sat my 9th-grade math or chemistry exam today, I would certainly fail.
I agree with the article's viewpoint that assessment strategies should focus on more "holistic key competencies and 21st century skills... assessment must transcend the Testing Paradigm."
On YouTube there is a set of dialogues on education between Marc Prensky and Stephen Heppell; this one is on assessment: http://www.youtube.com/watch?v=6tmD1caEFWQ&feature=share&list=PLRyiLbvPK9FyfaeQnwRfxW-6HGmi_UhnT
Regarding learning design and the evaluation of online courses, the same viewpoint applies: new key competences are required, and formative evaluation has an important role in improving learning. ePortfolios, built around project development, problem-solving, and good-quality products resulting from the learning process, should be promoted as the evidence of that learning.