W7: Evaluation - the heart of learning design or a tedious chore?


Yishay Mor

Feb 22, 2013, 7:30:49 PM
to olds-mo...@googlegroups.com
quick show of hands: where do you stand on the issue of evaluation?

Are you closer to:
a. Evaluation is the lifeblood of learning design. From initial concepts to final product - we evaluate and adjust at every step. A design not evaluated is not worth the bits it consumes.

or to:
b. Evaluation is a dreary tax we need to pay our funders / institutions / supervisors. Design is a creative process, it relies on imagination, inspiration and intuition. Good design has the "quality without a name", it defies measurement. 

explain...

Tom and Yishay

Cara Saul

Feb 23, 2013, 11:40:23 AM
to olds-mo...@googlegroups.com
Clearly it is important to check whether the designed outcomes are being achieved, but all too often more time is taken evaluating rather than innovating. All new enterprises require a measure of courage and tenacity, balanced with the ability to adapt quickly. The trick is to know which approach to take. Often it is obvious how the course design is meeting the needs of the students and performing against the desired learning outcomes, but not always, so evaluation is needed. It can be very tempting, though, to capture a wide range of data rather than make a focused investigation of key areas of interest and innovation. Of course, any evaluation also depends on your point of view; stakeholders will have different priorities. Stakeholder groups cannot be assumed to be homogeneous either: not everyone will have the same needs, expectations and motivation, which makes evaluation difficult. Perhaps what is needed is a balance of creative intuition and focused evaluation.

Thomas C. Reeves

Feb 23, 2013, 11:49:37 AM
to olds-mo...@googlegroups.com

Yishay, you have pointed out two starkly divergent views on evaluation that I have observed firsthand throughout my career. Obviously, my hand is up for the "Evaluation as life force of learning design" perspective. But I certainly understand why there are many people who lean toward the "evaluation is like cod liver oil" side of the continuum.

For me, it all goes back to decision making. We make decisions on different bases depending on the situation. Sometimes intuition may be a great foundation for making a decision, especially in a creative activity like design. In the real world of education, though, decisions are often made on the basis of habit (we have always done it that way), politics (the IT administration has bought X course management system and we have to use it), personal preferences (we should go with iPads instead of another tablet because I like Apple products), and so on. I believe, and my experience tells me, that evaluation activities that yield reliable and valid information provide a better foundation for decision making in most cases than habit, politics, personal preferences, and even intuition.

 

That said, evaluation does take more time and effort away from the design and development process than many people are willing to invest. And many times, evaluations are conducted simply to meet the requirements of a funding agency. In other instances, evaluation information is collected, but by the time the results are in, the decision point has already been passed.

 

Another problem is that evaluation results are sometimes completely ignored or even misused. My very first formal evaluation was in 1974-75, when I was a graduate student at Syracuse University. The hot technology at that time was the handheld four-function calculator, costing over $100 each. A very well known company gave a professor at Syracuse a contract to evaluate an elementary school math curriculum designed around calculators, and he hired me and several other graduate students to conduct the evaluation. We evaluated the curriculum in several schools around Syracuse and found that it was just awful, for many reasons. It had clearly been designed in a rush, with the primary goal of selling calculators rather than providing effective math education for children. We sent the company our evaluation report, confident that they would make significant changes in the design of the materials or perhaps start over with a whole new design. To our chagrin, instead of revising the curriculum, the company began to market the materials with full-color advertisements in magazines for teachers and administrators. The ads included a statement to the effect that “These materials have been evaluated by education experts at Syracuse University,” implying that Syracuse had endorsed the program and totally omitting the findings that indicated the materials were ineffective. No wonder some people are skeptical about evaluation.

 

Returning to your call for a show of hands, Yishay, I look forward to seeing other perspectives on this issue.


- Tom Reeves

Apostolos Koutropoulos

Feb 23, 2013, 1:16:08 PM
to Yishay Mor, olds-mo...@googlegroups.com
I think that there is a happy medium between the two. I do believe that evaluation and iteration (based on evaluation findings) are at the core of good learning design, but, by the same token, I do believe that there are people who can take it to the extreme. It can then become that tax (or smelly cod liver oil, as someone else put it) on the learning design.

I think that if there isn't adequate evaluation (formative and summative), a lot of learning design effort can go to waste. We are not perfect beings (no matter how much we may think we are ;-) ), so we will not get everything right on the first try. Evaluation is just as important an aspect of ID and LD as the other phases :)



Apostolos Koutropoulos

Feb 23, 2013, 1:19:12 PM
to Cara Saul, olds-mo...@googlegroups.com
I found your phrasing quite interesting: "but all too often more time is taken evaluating rather than innovating".

It sounds like analysis paralysis to me, something that can happen, but I don't see analysis and innovation as opposites. I think a good evaluation can lead to big bangs in innovation. I think this is where a team approach to LD can (potentially) be helpful: if some members get stuck in perpetual evaluation, others can assist in getting un-stuck and moving the project forward.



Ida Brandão

Feb 23, 2013, 3:07:47 PM
to olds-mo...@googlegroups.com
I think evaluation is an essential step of learning design. The fast changes that occur with technology and its use require us to revise and readapt. As we are in the process of developing something, we are permanently evaluating and revising. The more viewpoints we may introduce into the process, the better the result or product we may get.
And at the end of the process we had better test it with the users, otherwise we may get some surprises. The end users are the best validators.

Cara Saul

Feb 23, 2013, 8:40:06 PM
to olds-mo...@googlegroups.com, Cara Saul
"analysis paralysis" made me smile :) No don't see evaluation and innovation as opposites. Agree good team composition and dynamics can keep a project moving forward. However it is worth deciding which information is needed, the time and effort you wish/can afford to expend on evaluation. Like any other part of a project it needs planning and design.   

Diana Laurillard

Feb 24, 2013, 8:00:48 AM
to olds-mo...@googlegroups.com
Good question. Of course I vote for (a).

But I don't really see it as 'evaluation' when we're in the rapid design-test-redesign phase. I prefer the phrase we used to use at the OU: developmental testing. It's progressive and moves you forward; you find out what you need to know, and it may mean going back to the drawing board, or just a bit of tweaking, but you find out a lot, and that's exciting.

It must be rapid and iterative - just observe 5 users working with your precious design and it's enough to see what you really need to do. So rapid prototyping is essential.

'Evaluation' does not have the same energy as a word; it sounds too static, too final: "So is it good or bad? - actually it's pretty rubbish. Damn. End of story". With 'developmental testing' it's: "So what do we need to make it work better? - Ah, that's what we need, ok, let's do it". A different scenario entirely.

We do use 'formative evaluation' to mean the same thing, but even that has a sense of judgment, when what better characterises 'design-test-redesign' is a sense of exploration.

Diana

Thomas C. Reeves

Feb 24, 2013, 11:16:37 AM
to olds-mo...@googlegroups.com

Diana, you touch on an issue that has long troubled me: the veiled threat that is often insinuated by the word 'evaluation'. No one wants to be “evaluated,” which is why I try to use the term “assessment” when referring to any kind of evaluative activity focused on the characteristics of people (their aptitudes, abilities, achievement, attitudes, etc.) and reserve the term “evaluation” for evaluative activities focused on things (programs, products, projects, and places). I like the sense of “developmental testing” as you describe it. This weekend, I listened to an excellent presentation by George Siemens titled “Toward New Models of Coherence: Responding to the Fragmentation of Higher Education” that he gave at the University of Victoria on February 7, 2013.

http://www.elearnspace.org/blog/

Toward the end of the session, and in response to a question, George contrasted the experience of trying to get a course developed and approved within the traditional academic system (which can take two or more years) with the rapid development system utilized by Udacity, which he claimed provides formative peer review for new modules that faculty develop within 24 hours and full course development within a month, all driven by rigorous “developmental testing” (although he used the term evaluation). George is by no means a proponent of the xMOOCs emerging from Udacity, Coursera, or edX, but he does recognize that they are doing some things right, especially the use of formative evaluation (or, better, developmental testing). Of course, you and your colleagues at the OU and the LKL have done this type of developmental testing quite well for much longer.

- Tom Reeves

Joshua Underwood

Feb 25, 2013, 4:30:41 AM
to olds-mo...@googlegroups.com
I'd lean towards a) but somehow I associate the word 'evaluate' too much with a static summative process, despite knowing that that is not what is intended by formative evaluation. I'd like to see it as more dynamic and part of a creative process, driving forward innovation and inspiration, as well as participation in design.

I'm particularly interested in the agility and authenticity of formative evaluation/developmental testing. I wonder whether new tools for rapid prototyping enable more dynamic and authentic evaluative processes within development? A vision in which functional prototypes can be deployed in close to realistic conditions with tightly integrated feedback mechanisms and log data analysis, used by people with authentic motivations, and adjustments can be made and new ideas tried out more or less 'on the fly'. Perpetual beta, but with more open channels between designers, users and developers; probably somewhat like 'design in the wild', but with more rapid cycles of feedback and re-development.
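
To make that vision a little more concrete, here is a rough sketch of what I mean by tightly integrated feedback and log analysis inside a prototype. It is only an illustration: the event names, file name and functions are invented, not taken from any particular tool.

# A minimal sketch of in-prototype event logging and a quick 'developmental testing' pass.
# All names (log_event, summarise_log, prototype_events.jsonl) are illustrative assumptions.
import json
import time
from collections import Counter
from pathlib import Path

LOG_FILE = Path("prototype_events.jsonl")

def log_event(user_id: str, event: str, detail: str = "") -> None:
    """Append one interaction event to a JSON-lines log while the prototype is in use."""
    record = {"ts": time.time(), "user": user_id, "event": event, "detail": detail}
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def summarise_log() -> None:
    """Ask the developmental-testing question: where do users stall or abandon tasks?"""
    events = [json.loads(line) for line in LOG_FILE.read_text().splitlines()]
    counts = Counter(e["event"] for e in events)
    abandoned = [e["detail"] for e in events if e["event"] == "task_abandoned"]
    print("Event counts:", dict(counts))
    print(f"{len(abandoned)} abandonments, e.g.:", abandoned[:5])

# Instrument the prototype, let a handful of users try it, then review and adjust.
log_event("u01", "task_started", "draft a storyboard")
log_event("u01", "task_abandoned", "could not find the save button")
summarise_log()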

Briar Jamieson

Feb 25, 2013, 4:52:01 AM
to olds-mo...@googlegroups.com
a. I believe evaluation is important to understanding whether our intended goals are being met. However, the way Yishay has set up the question seems to make evaluation sound, well, tired :) And I would disagree, because I feel that structuring a good evaluation is creative and can be inspiring to design. I love feeling around for what tools, questions and approaches I can use with different roles (learners, teachers, administrators) to best capture their reactions, feedback and input. Have I asked all of the right questions? Am I neglecting an area in my questionnaire? Does the question I am asking predispose someone to an answer I want to hear? Have I dug deep enough to go beyond "I hate it" or "I love it" - WHY? What worked? What didn't? etc.

As I am reading more about 'evaluation' this week, I am learning more and more that at various stages I am already using evaluation processes. In my world we don't really refer to our processes as 'evaluation'. We use expressions like: "can I get your feedback", "would you mind reviewing", "your input would help me out", "we are testing the application", "a few tweaks", "who's doing the quality assurance", "let's do a survey". Perhaps, as Professor Laurillard suggests, different lexical choices have implied pragmatic value or contextual implications. I appreciate 'rapid' testing to capture that urgency and immediacy.

Evaluation sounds very formal and does conjure up funding requirements for 'indicators', 'measures', 'impact', etc., which to me implies a judgement being placed on the outcomes. In measurement terms, 10 views is 10 views (say, for instance, on Cloudworks, YouTube, or a blog post). Personally, I attach no judgement to that; it is what it is. But I find that online, some 'readers' of indicators do place a value on 10 views, and it is either "oh, that is wonderful" or "horrible, what are you going to do about it". Obviously we need to go deeper to understand this indicator: Can we see a pattern? What were the views last month? Who was viewing it? Why did it change? Do we want this value to be different? Why? What can we do to increase it? Will it bring more meaning and value? How do we know 100 views is better than 10 views? etc.
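
Just to illustrate the sort of deeper reading I mean, here is a tiny sketch (the figures are entirely made up) of putting a raw view count into the context of a trend before anyone rushes to judge it:

# A toy sketch of reading a view-count indicator in context rather than in isolation.
# The figures are hypothetical, purely for illustration.
views_by_month = {"Dec": 12, "Jan": 10, "Feb": 31}  # e.g. views of a Cloudworks page

months = list(views_by_month)
for prev, curr in zip(months, months[1:]):
    change = views_by_month[curr] - views_by_month[prev]
    pct = 100 * change / views_by_month[prev]
    print(f"{prev} -> {curr}: {views_by_month[curr]} views ({pct:+.0f}%)")

# The number alone says little; the questions that matter remain qualitative:
# who viewed it, why did it change, and do we even want this value to be different?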

So much to consider.

Briar



Apostolos Koutropoulos

Feb 25, 2013, 9:08:22 AM
to Briar Jamieson, olds-mo...@googlegroups.com


It's quite interesting that "evaluation" is such a dirty word, at least to some people. I personally don't see it as problematic; then again, I only associate it with recurring analysis and course correction, as opposed to something that could be considered punitive.

I am wondering whether one of the contributing factors to my view of "evaluation" is that I came into this field when agile software development and agile processes were first all the rage; thus I was never exposed to a monolithic, make-or-break reality (if it ever existed) :-)

I see quite a few people using assessment and evaluation interchangeably; however, my understanding is that assessment is (usually) reserved for assessing learners and how well they can demonstrate mastery of learning outcomes, while evaluation refers to evaluating the learning intervention itself.

sent from my Newton 800

Jane Nkosi

Feb 26, 2013, 6:42:08 AM
to olds-mo...@googlegroups.com, Briar Jamieson
Hi all,
I think both the words "assessment" and "evaluation" have some judgemental connotation. Whether you assess students or evaluate a product, some judgement must be made. In my view it is important to understand whether what you are developing has value, so it may be necessary to have formative evaluation to correct errors if any occur. Having said that, though, it may be necessary to use friendlier vocabulary, as Briar suggests.

Jane

Apostolos Koutropoulos

Feb 26, 2013, 9:38:08 AM
to olds-mo...@googlegroups.com
Of course I expect some sort of objective judgement in both assessment and evaluation situations :-) You can offer constructive critique and objectively evaluate someone without being judgmental, negative, or detrimental to someone's (or something's) development. While language matters, and this is my own personal opinion, I think it's important to reclaim the positive aspects of terms such as assessment and evaluation, rather than throw them out wholesale and replace them. If we don't get to the underlying issue, any new terms we select will be soiled by whatever the underlying ailment is :)




Bob Ridge-Stearn

Feb 26, 2013, 10:25:21 AM
to olds-mo...@googlegroups.com
Well, I feel I've just answered this question on my blog, without having seen it.

It seems to me that learning designers (those who see themselves as 'designers') are much more inclined to value formative evaluation of their learning designs than academics who see themselves as researchers and teachers and think in terms of courses rather than learning designs.  They probably do see it as a dreary tax.

Me?  I'm a manager of an 'e-learning department' (my apprenticeship involved a route through teaching and instructional design) and I sometimes feel like an aircraft mechanic.  I don't see many aircraft designers to be honest. I see a lot of pilots - academics who jump into the cockpit each semester and fly by the seats of their pants.  When they land they sometimes ask me to fix things or give their course a quick once over. Occasionally they make an emergency landing and need something fixed. But asking them to do serious, systematic formative evaluations wouldn't be appreciated unless they were interested in SoTL and could get a research paper out of the exercise.

However, it's something that I really want to do so am looking for practical, painless ways of evaluating our online courses.

This OLDS MOOC course is quite interesting in that it is about its own subject matter, and the tutors on it - like you yourself, Yishay - are passionately interested in designing and delivering it because that is your academic discipline. That explains why you all spend so much time interacting with us through so many digital channels. I work with academics in fields unrelated to 'e-learning' who are passionate about their discipline and care deeply for their students, but who do not want to engage with the technology to the same extent. When we design for the 21st century we have to remember that not everyone will want to do what we can do, and not everyone will want to see teaching as a design process.

Best wishes,
Bob R-S

Thomas C. Reeves

Feb 26, 2013, 1:44:57 PM
to olds-mo...@googlegroups.com

Bob, I just read your Blog about Week 7 at: http://thedigitalday.wordpress.com/2013/02/26/oldsmooc-w7/  You have provided a rich and realistic discussion of the issues around formative evaluation in the context of supporting academic staff engaged in online teaching. The analogy of being an aircraft mechanic for instructors flying by the seat of their pants is spot on.

 

In 1987, John Sculley, then CEO of Apple Computer, showed Apple’s vision of the future with a five-minute video called the “Knowledge Navigator.” The video features an academic getting ready to teach a class (by lecturing) in 2011 with the assistance of a bow-tie wearing intelligent agent.

http://archive.org/details/DigibarnKnowledgeNavigatorAppleConcept1987

 

Thank goodness, Apple’s vision of an AI agent has not become reality, Bob, or you might be out of a job. Instead we have Siri on our iPhones and iPads, but it is not very functional. I just asked Siri to “Show me the Knowledge Navigator video,” and Siri replied “I can’t search for videos.” I then asked Siri, “Do you have any good ideas for teaching my class tonight?,” and Siri replied with a one-word response: “Flatterer.” Hmmmm.

 

In your Blog, you wrote that when you review an online course for one of your clients, you consider factors such as:

  • usability,
  • accessibility, 
  • navigation,
  • how it displays on different devices,
  • communication (tutor to student, and student to student),
  • copyright,
  • pacing,
  • time on task,
  • learning outcomes and
  • assessment.

These are all important characteristics. Some of these are obviously focused on technical issues and some on pedagogical (learning design) issues. I have found that “alignment” among the pedagogical dimensions is one of the weakest elements in many online and blended courses. There are at least seven critical components of any learning environment (online, blended, or traditional) that must be carefully aligned: 1) objectives, 2) content, 3) pedagogical strategies, 4) learner tasks, 5) teacher roles, 6) technology roles, and 7) assessment. Misalignment of these components is obvious in many higher education courses and programs. For example, although most higher education institutions present mission statements that emphasize the importance of graduates developing strong skills in critical thinking, creativity, and writing, too few courses actually address these types of higher order outcomes and/or fail to assess them. Although objectives and assessment activities are often the most frequently misaligned components of courses and programs, each of the seven components must be carefully considered when examining the overall alignment of a course. (My wife, Trisha, and I have described the issue of alignment in more detail in:  Reeves, T. C., & Reeves, P. M. (2012). Designing online and blended learning. In L. Hunt & D. Chalmers (Eds.), University teaching in focus: A learning-centred approach (pp. 112-127). Camberwell, VIC, Australia: ACER Press.)
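
To illustrate the alignment point, purely as a sketch with invented course data (not any real instrument or rubric), an alignment audit can be as simple as cross-checking that every stated objective is picked up by at least one learner task and at least one assessment:

# An illustrative alignment audit over some of the components listed above.
# The course data is invented; the point is only the idea of cross-checking objectives.
course = {
    "objectives": ["critical thinking", "academic writing", "recall of key terms"],
    "learner_tasks": {
        "weekly quiz": ["recall of key terms"],
        "essay draft with peer review": ["academic writing", "critical thinking"],
    },
    "assessments": {
        "multiple-choice exam": ["recall of key terms"],
        "final essay": ["academic writing"],
    },
}

def covered(objective, activities):
    """True if any task or assessment claims to address the objective."""
    return any(objective in targets for targets in activities.values())

for objective in course["objectives"]:
    gaps = [name for name in ("learner_tasks", "assessments")
            if not covered(objective, course[name])]
    status = "aligned" if not gaps else "missing in: " + ", ".join(gaps)
    print(f"{objective}: {status}")

# Here 'critical thinking' is practised in a task but never assessed, exactly the
# kind of objective/assessment misalignment described above.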

 

In your blog, you also mentioned looking at Quality Assurance systems for online learning. I did an analysis of these systems for a group of Australian universities in late 2011, and I found that the Quality Matters peer review approach is excellent: http://www.qmprogram.org/. A very good book about this topic is: Jung, I., & Latchem, C. (2011). Quality assurance and accreditation in distance education and e-learning: Models, policies and research. New York and London: Routledge.

 

Thank you again for your thoughtful contributions to our discussion of formative evaluation this week.

Yishay Mor

Feb 28, 2013, 6:17:32 AM
to olds-mo...@googlegroups.com
Hi Bob,

My answer is the same as the one I used to give when I was training / managing software engineers:
                              an hour spent on design saves 10 hours in implementation.
I think the same should apply to evaluation, or rather:
                              an hour spent on evaluation saves 10 hours in damage control.

But I guess for that to really be true, we need a culture of LD and evaluation, and the tools to support it. 

Yishay

Apostolos Koutropoulos

Feb 28, 2013, 12:02:08 PM
to olds-mo...@googlegroups.com
I love this: "an hour spent on evaluation saves 10 hours in damage control"!


Niall Watts

Feb 28, 2013, 12:26:51 PM
to olds-mo...@googlegroups.com
Briar

I like your 'friendly' choice of words. It sounds helpful but not judgemental. I often ask colleagues for comments, a review, etc. of learning materials or websites, but not to 'evaluate' or 'assess' them.

Niall



Niall Watts

Feb 28, 2013, 12:33:39 PM
to olds-mo...@googlegroups.com

It's a matter of organisational culture. In higher education, there may not be much (perceived) damage to be controlled. A design can be good enough for the lecturer while the educational technologist sees how it could be better.

Kelly Edmonds

Feb 28, 2013, 6:58:21 PM
to olds-mo...@googlegroups.com
Oh Tom, that is just awful. I can't believe they were that unethical! I guess evaluation is something people don't want to do because they are afraid of the outcomes. But it's best to know the truth (at least somewhat) and make creative decisions from there. I can see how important it is to evaluate early on in a design.

 


Ida Brandão

Mar 1, 2013, 11:50:16 AM
to olds-mo...@googlegroups.com

I've just received the daily newsletter from Stephen Downes, and he mentions a recent article published in the European Journal of Education, «Changing Assessment» - http://onlinelibrary.wiley.com/doi/10.1111/ejed.12018/pdf - which focuses on the issue of wanting to change curricula and learning objectives while keeping assessment based on testing.

This focus on examinations of knowledge and facts, and on formal testing, really disturbs me. So much effort goes into exams - training learners, logistics that cost a lot of money (surveillance, police involved in the distribution of national exams, etc.) - and a few weeks later all this factual knowledge is erased from the students' brains. If I were subjected to my 9th grade exam in maths or chemistry today, I would certainly fail.

I agree with the viewpoint of the article stating that assessment strategies should focus on more «holistic key competencies and 21st century skills... assessment must transcend the Testing Paradigm».

On YouTube there's a set of dialogues between Marc Prensky and Stephen Heppell on education, including this one on assessment - http://www.youtube.com/watch?v=6tmD1caEFWQ&feature=share&list=PLRyiLbvPK9FyfaeQnwRfxW-6HGmi_UhnT

Regarding learning design and the evaluation of online courses, the same viewpoint applies: new key competences are required, and formative evaluation has an important role in improving learners' learning, based on project development, problem-solving and good quality products resulting from the learning process. ePortfolios should be promoted as the evidence of that learning.

Penny Bentley

Mar 8, 2013, 1:29:28 AM
to olds-mo...@googlegroups.com
Ida, re ePortfolios, what is your opinion on the platforms used? I'm a big fan of student owned/managed ePortfolios on platforms such as WordPress - an eP that's owned by and moves with the learner throughout life. Another popular option here in Aus is the use of platforms owned/managed by institutions, such as Mahara & PebblePad. Any comment?