Learning Analytics

ViplavBaxi

Aug 27, 2010, 3:59:31 AM
to Learning Analytics
Hi George,

A few thoughts:

You state that "Learning analytics is the use of intelligent data,
learner-produced data, and analysis models to discover information and
social connections, and to predict and advise on learning." and that
it transcends analytics as such and goes on to support "action,
curriculum mapping, personalization and adaptation, prediction,
intervention, and competency determination" (although Stephen would
prefer removing "competency" and introducing "capacity" as the
relevant term, if I understand him correctly).

There are different levels of information that can be captured by
technology and analyzed later by an analytics platform (whatever that
might look like).

Some are direct (user-action-based) measures (identities, clicks,
tweets, page/site hits, search keywords, etc., which can "roll up" into
measures such as participation, connectedness, etc.); some are indirect
(other-people-dependent) measures (people rating a resource, etc.). But
the underlying fact is that these are implicit or explicit actions
that are easily recordable by technology.
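
To make the "roll-up" idea concrete, here is a rough sketch of how raw
actions might aggregate into a participation measure (the event records,
action names, and weights below are invented purely for illustration):

# Hypothetical sketch only - the event schema and weights are made up.
from collections import Counter

events = [
    {"user": "a", "action": "click"},
    {"user": "a", "action": "tweet"},
    {"user": "a", "action": "page_hit"},
    {"user": "b", "action": "click"},
]

# Arbitrary illustrative weights for how direct actions could roll up
# into a single participation score.
weights = {"click": 1, "page_hit": 1, "tweet": 3, "search": 2}

def participation(user):
    counts = Counter(e["action"] for e in events if e["user"] == user)
    return sum(weights.get(action, 0) * n for action, n in counts.items())

print(participation("a"))  # 5 with the toy data above

The choice of weights is of course itself a value judgment, which is
part of the modeling problem I come back to below.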

These could be augmented by a host of digitizable, traditionally
offline activities and behaviors such as the amount of time spent in the
library, frequency of class attendance, number of questions asked
in class, etc., which have so far not found their way into the online
profile of a student. They could also be augmented by data-in-context
such as the curriculum, the learning process itself, institutional
goals, reward mechanisms, location, access, immediate network, etc.

The difficult area is in judging things like: is the learner
reflecting? Are arguments concise and logical? Is there trust? These
and many others have traditionally been subjectively assessed. They are
difficult to capture and even more difficult to assess, the latter
because some a priori modeling of the "ideal" conditions needs to be
accomplished first.

I am particularly intrigued by your concept of data trails - is it
somewhat similar to the Sliced PLE concept I have been thinking on?
(http://learnos.wordpress.com/2008/02/16/ple-and-soft-peer-review/)

You also state "The body of "knowledge" learners need to master to
be a psychologist, for example, can be contrasted with the data-trails
learners have left through formal and informal learning."

With the concept of trails as you have outlined it, different people
will learn differently, follow and make different connections, and
leave different trails (to become a psychologist, in your example). So
how does one contrast against multiple, equally valid trails?

As a consequence, what is the predictive power that can be generated
on the basis of these trails and data that is captured? Two learners
going through the same trail may come out with totally different
learning results or vice versa. How does one determine an optimum/
benchmark trail for everyone?
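
One crude way to picture the contrast: treat the body of knowledge as a
set of topics and each trail as the set of topics it touches. Coverage
can then be identical even when the trails themselves are entirely
different (the topic names below are invented for illustration):

# Sketch only - topics and trails are invented; real trails would be far
# richer than flat topic sets.
required = {"statistics", "research_methods", "cognition", "ethics"}

trail_formal = ["statistics", "cognition", "ethics", "research_methods",
                "lecture:abnormal_psych"]
trail_informal = ["ethics", "cognition", "statistics", "research_methods",
                  "blog:memory", "forum:case_studies"]

def coverage(trail):
    # Fraction of the required body of knowledge the trail touches.
    return len(required & set(trail)) / len(required)

print(coverage(trail_formal), coverage(trail_informal))  # 1.0 1.0

Which only restates the question: if both trails are "equally valid",
the predictive signal has to come from more than coverage alone.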

If we think about analysis models, it gets a lot more complex. An
analytic model, if it can be created at all, would need to help churn
out concrete insights/predictions about a learner or a group that can
be addressed by feasible actions. One way is to refer to benchmarks or
ideal conditions and infer learner progress or behavior based on
deviations from the ideal. The other is to eschew benchmarks, arrive
at a model of several empirically possible behaviors, and impose value
judgments on the basis of certain theories or ideas, making it possible
to translate to concrete actions. Either way, I don't think we can
ever hope to identify and quantitatively model enough variables to
make a large enough analytic abstraction - we could perhaps
do that for closely scoped targets (say, within a particular
section of a community of practice or a particular course, with a
limited number of variables).
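
For the first (benchmark/deviation) approach, the simplest imaginable
sketch would be something like the following; the variable names and
benchmark values are placeholders, and a real model would need far more
variables than can feasibly be quantified:

# Sketch only - variable names, benchmark values, and the distance
# measure are placeholders for illustration.
benchmark = {"posts_per_week": 3.0, "peer_rating": 4.0, "library_hours": 2.0}

def deviation(learner):
    # Mean absolute deviation from the benchmark, normalised per variable.
    diffs = [abs(learner[k] - v) / v for k, v in benchmark.items()]
    return sum(diffs) / len(diffs)

learner = {"posts_per_week": 1.0, "peer_rating": 4.5, "library_hours": 0.5}
print(round(deviation(learner), 2))  # larger = further from the "ideal"

Even in this toy form, every number encodes a value judgment about what
the "ideal" learner looks like, which is why I suspect it only works for
closely scoped targets.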

I think it is also important to keep in mind technical limitations in
terms of computational power and the time required to generate
analytics, as the OPUS 2 team reminded me. We are dealing with some
heavy data analysis here, which may not be possible to do in real time
as the variables and inter-relationships increase exponentially.

In my opinion, we need to evolve new, simpler methods focused on
generating relevant metrics - for example, we should worry about the
argument or contribution, not whether it is blogged or tweeted.

Viplav




George Siemens

Aug 30, 2010, 6:43:20 PM
to learning...@googlegroups.com
Hi Viplav - always a pleasure to hear from you!

I've posted a few comments inline.

On Fri, Aug 27, 2010 at 1:59 AM, ViplavBaxi <vipla...@servitium.com> wrote:
Hi George,

A few thoughts:

You state that "Learning analytics is the use of intelligent data,
learner-produced data, and analysis models to discover information and
social connections, and to predict and advise on learning." and that
it transcends analytics as such and goes on to support "action,
curriculum mapping, personalization and adaptation, prediction,
intervention, and competency determination" (although Stephen would
prefer removing "competency" and introducing "capacity" as the
relevant term, if I understand him correctly).

**The difficulty with replacing competency with capacity is that all forms of evaluation for formal accreditation are based on competency. Capacity is more relevant in a business context, but a formal degree is an expression of competence, not capacity (right or wrong!).

There are different levels of information that can be captured by
technology and analyzed later by an analytics platform (whatever that
might look like).

Some are direct (user-action-based) measures (identities, clicks,
tweets, page/site hits, search keywords, etc., which can "roll up" into
measures such as participation, connectedness, etc.); some are indirect
(other-people-dependent) measures (people rating a resource, etc.). But
the underlying fact is that these are implicit or explicit actions
that are easily recordable by technology.
**yes - there are also measures that are inferred from our social network - things that we haven't intended to signal. For example, if most of my friends and family are affiliated with a certain religious group or political party, you can make assumptions (that might need to be validated with different types of analysis - language or otherwise) about what my views are.
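
A toy illustration of that kind of inference (the people, ties, and
affiliations are invented, and as noted, any such inference would need
validation against other evidence):

# Toy illustration only - the network and affiliations are invented.
from collections import Counter

affiliation = {"friend1": "group_x", "friend2": "group_x", "friend3": "group_y"}
network = {"me": ["friend1", "friend2", "friend3"]}

def inferred_affiliation(person):
    # Naively assume the majority affiliation among a person's connections.
    votes = Counter(affiliation[f] for f in network[person] if f in affiliation)
    return votes.most_common(1)[0][0] if votes else None

print(inferred_affiliation("me"))  # "group_x" - never explicitly signalled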

These could be augmented by a host of digitizable, traditionally
offline activities and behaviors such as the amount of time spent in the
library, frequency of class attendance, number of questions asked
in class, etc., which have so far not found their way into the online
profile of a student. They could also be augmented by data-in-context
such as the curriculum, the learning process itself, institutional
goals, reward mechanisms, location, access, immediate network, etc.

** Agreed
 

The difficult area is in judging things like: is the learner
reflecting? Are arguments concise and logical? Is there trust? These
and many others have traditionally been subjectively assessed. They are
difficult to capture and even more difficult to assess, the latter
because some a priori modeling of the "ideal" conditions needs to be
accomplished first.

**yes, this is difficult. We face the same challenge now - students pass our tests, but we really don't know if they understand the subject area.
 
I am particularly intrigued by your concept of data trails - is it
somewhat similar to the Sliced PLE concept I have been thinking on?
(http://learnos.wordpress.com/2008/02/16/ple-and-soft-peer-review/)

** Not quite - data trails, as I mention them, are that constant stream of data that we leave - like a pheromone trail. This data trail includes a combination of intellectual, professional, belief, and personal data.

You also state "The body of "knowledge" learners need to master to
be a psychologist, for example, can be contrasted with the data-trails
learners have left through formal and informal learning."

With the concept of trails as you have outlined it, different people
will learn differently, follow and make different connections, and
leave different trails (to become a psychologist, in your example). So
how does one contrast against multiple, equally valid trails?
**Again, I'll turn to the notion of competence. The trail is important, but ultimately, how we get to be competent isn't nearly as important as the fact that we are competent in a field. So, it doesn't matter whether I learned about literature in a university classroom or as a result of personal passion sitting in a local library. 