Fwd: [learninganalytics] CALL FOR PAPERS: JEDM SPECIAL ISSUE: "EDUCATIONAL DATA MINING ON MOTIVATION, META-COGNITION, AND SELF-REGULATED LEARNING"


Philipp Schmidt

Aug 10, 2011, 5:52:12 PM
to badg...@googlegroups.com, badge...@googlegroups.com
Maybe interesting for badge-lab (or badge-talk) or both? Is there
another group ...

P


---------- Forwarded message ----------
From: George Siemens <gsie...@gmail.com>
Date: 10 August 2011 14:06
Subject: [learninganalytics] CALL FOR PAPERS: JEDM SPECIAL ISSUE:
"EDUCATIONAL DATA MINING ON MOTIVATION, META-COGNITION, AND
SELF-REGULATED LEARNING"
To: learning...@googlegroups.com
Cc: "Ryan S.J.d. Baker" <rsb...@wpi.edu>


Hi all - see below for special issue on educational data mining...

---------- Forwarded message ----------
From: Ryan S.J.d. Baker <ry...@educationaldatamining.org>
Date: 8 August 2011 14:50
Subject: CALL FOR PAPERS: JEDM SPECIAL ISSUE: "EDUCATIONAL DATA MINING
ON MOTIVATION, META-COGNITION, AND SELF-REGULATED LEARNING"
To: edm-an...@freelists.org
Cc: Phil Winne <wi...@sfu.ca>, Kalina Yacef <kal...@it.usyd.edu.au>


CALL FOR PAPER SUBMISSIONS FOR SPECIAL ISSUE OF
THE JOURNAL OF EDUCATIONAL DATA MINING

EDUCATIONAL DATA MINING ON
MOTIVATION, META-COGNITION, AND SELF-REGULATED LEARNING

Guest Editors
Ryan S.J.d. Baker, Worcester Polytechnic Institute (rsb...@wpi.edu)
Philip H. Winne, Simon Fraser University (wi...@sfu.ca)

Aim of Special Issue

We invite paper submissions for a special issue of the peer-reviewed
Journal of Educational Data Mining that focuses on using educational
data mining methods to advance basic and applied research on the
nature and roles of motivation, meta-cognition, and self-regulated
learning (SRL) in the learning sciences. It is increasingly
acknowledged that SRL processes interact in important ways with
motivational and meta-cognitive processes. We seek papers that use
EDM to explore these
interactions, as well as papers that explore any of these three areas
in isolation or in relation to other important processes and
constructs.

Papers should apply accepted or novel educational data mining methods
in rigorous, demonstrably valid ways to study these topics. We aim to
assemble a collection of articles that explore how the new methods
for measurement and analysis that EDM affords enable new
discoveries in these areas. Because an important goal of this special
issue is to educate researchers who are not familiar with the power
and benefits of data mining, papers should be written in a style that
is simultaneously meaningful to experts in data mining, and
educational for those who are entirely new to these methods. Data can
be drawn from any educational source (e.g. interaction logs,
questionnaire instruments, field observations, video or text replays,
collaborative chats, discussion forums) so long as it supports valid
inference; simulated data is not admissible for this special issue.
All papers must make a contribution to research in the domain studied
and must give full detail on the educational data mining methods used
to derive these contributions. It is not necessary, however, that a
paper make innovations in educational data mining methods, although
such innovations are, of course, welcome (so long as they are valid).

Review Process
As stipulated by JEDM reviewing guidelines, each submission will be
peer-reviewed by three colleagues in the field, including both members
of the JEDM editorial board and reviewers chosen specifically for
this issue.

Submission Guidelines

We invite submissions of any length; see the JEDM submission
guidelines. All submissions can be made electronically via email to
Ryan S.J.d. Baker (rsb...@wpi.edu).

Deadlines

Please submit your manuscript by December 1, 2011. We plan a review
cycle of approximately three months, so you should receive feedback
and a decision by approximately March 1, 2012.

Please direct questions to the guest editors at rsb...@wpi.edu and
wi...@sfu.ca.

Barry Joseph

Aug 11, 2011, 2:01:45 PM
to badge...@googlegroups.com
A really interesting challenge came up today. Maybe someone has a suggestion for a workaround.

We are launching our latest school-based badging project. In brief - real brief - the rubrics for the badges run from 0 to 3, and youth earn an achievement if their work earns a "3" rating. To be considered for an achievement, youth submit Voicethread portfolios of their work to the committee associated with a particular badge.

If a youth gets a "3," they receive the achievement. If they get less than a 3, they get only feedback. That means there is no scaffolding to advance towards an achievement - it's a boolean system. You get it or you don't.

If the process were automated, we could offer leveled achievements - Informational Literacy Achievement, L. 1, Informational Literacy Achievement, L. 2, etc. That would be ideal for the students and would be more game-like (by rewarding smaller tasks). But for the school staff, the work would be overwhelming - more than four times the work currently planned. (A hypothetical sketch of the two models follows below.)
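
For illustration only, here is a rough sketch of the two models in Python - the function names and achievement strings are made up for this example, not anything from our actual system:

    # Hypothetical sketch contrasting the two models; names are made up.

    def boolean_award(rubric_score: int) -> str | None:
        """Current plan: the achievement is earned only at a rubric score of 3."""
        return "Informational Literacy Achievement" if rubric_score == 3 else None

    def leveled_award(rubric_score: int) -> str | None:
        """Automated alternative: one achievement level per rubric score above 0."""
        if rubric_score <= 0:
            return None
        return f"Informational Literacy Achievement, L. {rubric_score}"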

Does anyone have ideas about how to find a balance between these two extremes - between a boolean system with no scaffolding and a scaffolded system that requires more human resources than are available?

Barry Joseph
Global Kids

Alex Halavais

Aug 11, 2011, 2:36:41 PM
to Badge Talk
But I presume that feedback is not just "you did this wrong" but
rather "here are the things you can do to improve your work." It
seems like that may be a bit harder depending on the task. If it's a
"one-off" sort of thing, there isn't much room for improvement.
But--although my publications are always accepted "as is" ;)--I hear
that some people get revision requests on manuscripts. It seems like
that's a built-in piece of the assessment process.

IIRC, you only have specific windows of time when work is accepted,
so it may be frustrating to have to wait until the next assessment
period to have another go, but that also means you are less likely to
get people submitting work that they know won't get past the zero.

As I noted at our meeting, I hadn't really planned on it, but one of
the advantages of the system I had was that people routinely failed
to meet the requirement but learned quite a bit in the process of
getting there. I'm trying to figure out how to make that go more
smoothly, and I suspect I'll keep something similar to what I have
now: badges can be granted, but you don't really "fail" a badge; it
just stays in limbo until you either withdraw it or improve it enough
to be accepted...

Alex

Erin Knight

Aug 11, 2011, 2:36:41 PM
to badge...@googlegroups.com
Hey Barry -

If you allow students to fix the issues in the work and resubmit, is that too much of a human resource overhead? We are working on a system for P2PU that has rubrics aligned with each assessment; the reviewer compares the work to the rubric and checks off, element by element, which criteria the learner has met. Then they give an overall 1-4 rating: 1 = does not meet any of the criteria, 2 = does not meet most of the criteria, 3 = meets most of the criteria, 4 = meets all of the criteria. (We have not decided what the cutoff for the badge is, but it will probably be an average between 3 and 4 across several reviewers.) With each rating, the learner also gets back the rubric information, so they can see exactly which criteria they did not meet. They can update their work and resubmit to try to reach the badge/achievement level.
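
To make that concrete, here is a rough sketch of the scoring logic in Python - a minimal illustration only; the names (Review, overall_rating, earns_badge) and the 3.5 cutoff are placeholders, not our actual implementation, which is still in flux:

    # Rough sketch of the rubric-review flow described above; the names
    # and the 3.5 cutoff are placeholders, not the real P2PU system.
    from dataclasses import dataclass

    @dataclass
    class Review:
        criteria_met: dict[str, bool]  # element-by-element rubric checkoffs
        rating: int                    # this reviewer's overall 1-4 rating

    def overall_rating(criteria_met: dict[str, bool]) -> int:
        """Map the rubric checkoffs onto the 1-4 scale."""
        met, total = sum(criteria_met.values()), len(criteria_met)
        if met == total:
            return 4   # meets all of the criteria
        if met > total / 2:
            return 3   # meets most of the criteria
        if met > 0:
            return 2   # does not meet most of the criteria
        return 1       # does not meet any of the criteria

    def earns_badge(reviews: list[Review], cutoff: float = 3.5) -> bool:
        """Average the ratings across several reviewers and compare to a
        cutoff somewhere between 3 and 4 (still undecided)."""
        average = sum(r.rating for r in reviews) / len(reviews)
        return average >= cutoff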

This approach gives a bunch of information back to the learner and has the potential for a lot of learning to occur within the assessment. It does put more of a burden on the reviewers, though, b/c they have to review the work again. At the very least, you could give that kind of information back to the learner even without the opportunity to update and resubmit, so that they could learn how to improve moving forward.

-E
--
Assessment and Badge Project Lead
Mozilla and P2PU

Barry Joseph

Aug 11, 2011, 5:24:27 PM
to badge...@googlegroups.com
Thank you, both Alex and Erin. I think I hear you both saying that all work should receive detailed feedback that can be incorporated into a future submission. I would agree. And our system is very close to the one you described, Erin.

However, the concern is that if submissions receive only feedback, with no "reward" or recognition for, say, having moved from a 1 to a 2 on the rubric, that might create a disincentive. We want to set the bar high but provide encouragement every step along the way.

So we might need to include in the feedback some pats on the back for progress, which can happen within a small school. We are also thinking about power-ups, which were originally only for those who earned badges, and creating less significant, one-time-use power-ups for those who make substantive, but less-than-badge, achievements. That might be the way to give learners the support they need as they work towards their badges without increasing the human resources required.

Barry



Sent from my Palm Pre

