I would definitely recommend that we spend some quality time with
Latour's door-closer article.
I would humbly suggest that we all agree to read it by Sept. 15. On
Sept. 16, we can post to the list answers to the following question
(just for starters):
1. Re: "delegation." Latour notes,
"If we compare the work of disciplining the groom with the work he
substitutes for, according to the list defined above, we see that this
delegated character has the opposite effect to that of the hinge: a
simple task - forcing people to close the door - is now performed at an
incredible cost; the minimum effect is obtained with maximum spending."
Latour notes that human tasks are delegated to the door - to ill
effect. What do we do when we delegate human tasks to technology? Are
we making humans stupid? Irresponsible?
Here's my potential answer:
When we delegate human tasks to technology, we often breed a sense of
moral ambiguity. For example, it is no longer morally required to say
"goodbye" to someone on MSN. When you go off-line, MSN will tell the
other person you're gone.
Just to be completely provocative, I think technology makes us impolite.
Can we design technology that does the opposite?
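Just to make that MSN example concrete, here's a toy sketch of the delegation at work (all names and message wording here are hypothetical, not MSN's actual behaviour) - the system, not the person, performs the social act of saying goodbye:

```python
# Toy sketch of presence delegation: when a user disconnects, the
# system -- not the user -- announces the departure to their contacts.
# The "goodbye" is delegated to the machine, so the human never has
# to perform it. Names and message text are hypothetical.

def announce_departure(user, contacts, send):
    """Notify each contact that `user` has gone offline.

    `send` is any callable taking (recipient, message); collecting the
    messages makes the delegation easy to inspect.
    """
    sent = []
    for contact in contacts:
        message = f"{user} has gone offline"
        send(contact, message)
        sent.append((contact, message))
    return sent

# Usage: the machine performs the farewell on the user's behalf.
outbox = []
announce_departure("natasha", ["oliver", "arvind"],
                   lambda to, msg: outbox.append((to, msg)))
```

The moral burden of the farewell moves from the user to the `send` callable - which is exactly the kind of delegation Latour describes.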
> -----Original Message-----
> From: theor...@googlegroups.com
> Behalf Of CHRISTINE Z. MILLER
> Sent: Wednesday, August 30, 2006 12:30 PM
> To: nsch...@hotmail.com; User Research Theory Study Group Planning
> Subject: Re: This Group Still Active?
> I'm also very interested in continuing with the group, although I
> haven't been able to be as active as I'd like. I have downloaded the
> Latour article. It would be great to get some discussion going around
> this.
> Chris Miller
> ---- Original message ----
> >Date: Wed, 30 Aug 2006 08:23:44 -0700
> >From: "Natasha" <nsch...@hotmail.com>
> >Subject: Re: This Group Still Active?
> >To: "User Research Theory Study Group Planning" <theory-
> >I'm still very interested in this group. "The Sociology of a Door"
> >piece does sound intriguing. I agree with Steve that having a bit
> >of structure around the sorts of questions we should be asking of the
> >article as we read it would help. Or maybe, Oliver, you could offer the
> >initial piece of discussion and we can let it evolve from there. Is
> >there an easy online link to this article that can be provided to the
> >group?
> >Happy early Labour Day Weekend,
> Christine Miller, Ph.D. Candidate
> Business & Organizational Anthropology
> Wayne State University
> Cell ph#: (248) 914-0475
> "Those who would suppose that there is a logic which everyone would
agree to if
> he understood it, are more optimistic than those versed in the history
of logic have
> a right to be."
> C. I. Lewis, 1923
Latour argues that it is wrong to distinguish technological artifacts
from humans. Technology supplants the labour of humans; for example, he
argues that a door effectively replaces the labour of having to first
break a hole in a wall and then rebuild it again in order to travel to
the other side. This is an astonishing feat, one that would take great
effort on the part of humans. Similarly, the "groom" replaces the work
of a human porter/doorman. However, this is not a perfect exchange of
one for the other. The character of the "groom" is changed following
the exchange of the human for the technological kind. It is a strict
porter, insisting on closing the door on its own terms rather than the
terms of the visitors: it fights with visitors who wish to hold the
door open and even slams it closed on those who do not enter quickly
enough.
* Describe a technology or a device, including its environment and
social setting. This should be something real, something you have
actually seen and have experience of.
* Now, imagine the same technology or device in human terms. Describe
its interactions with its users. What does it think of them? What
does *it* think its job is? What did it lose in the transformation
between a human-performed role and one performed by a machine?
* Finally, end with the lessons you would take from this for design,
if you were to take the Latour-type analysis literally. Judge the value
of these lessons in terms of conventional design wisdom. Are they useful
or over-the-top? Did they miss something? Did they catch something
that you would otherwise have missed?
Another example - I got added to a newsletter a colleague sends out.
It's a business-related item, usually, although it got into a strange
religious area this time, and I felt like it crossed a line (even
though he kind of flagged that this might make us uncomfortable) - I
wanted off - but there was no unsub link.
I wasn't prepared to write and say "take me off" but I did write and
say "you need an unsub link" - which got the response "good point -
do you want off?"
There's a face issue here; I couldn't do it. I want to unsub
silently; even if he gets the notification, I don't want the
transaction to take place in the same forum as the communication, for
fear of what it may signify.
He avoided technology (a more sophisticated distribution tool like
Google Groups) and used one-to-one email to try and manage a group
communication; the missing control features - ones we as readers
don't have - somehow change our actions.
I could see someone else (or me in a worse mood) seeing no link to
unsub and writing him personally in aggravation "TAKE ME OFF THIS!" -
which is more rude than simply clicking something so here gets a
message ("Steve Portigal has been removed from your list"). Which is
more rude? Which gives me more chance to be a jerk, or to be nicer?
Not much of a conclusion; it's a complex thing you raise!
I'm not yet very good at this summarising business, especially at 6 in
the morning, but off the top of my head, a couple of reactions...
This perspective of technological artifacts as delegatees (is that a
word?) is something very Latour-ian, it would seem, coming from
someone who's worked a lot with actor-network theory. From my
perspective though, what's interesting about it is that it is in
stark contrast to most conventional perspectives in HCI - where the
machine simply performs tasks that need to be done; the more
introspective of these tend to see machines - especially computing
systems - as things that reify processes. Coming from a tradition of
engineering concerned with automation, where the dominant concerns are
accuracy of machine behaviour, control mechanisms, and efficiency, it
is not surprising that machines in this perspective are visible only
as far as they are associated with tasks. Combine that with the then
predominant methods of eliciting task structure, and it's no wonder
that we technologists are where we are.
Treating the machine as a social actor in its own right seems to
provide a framework for integrating utterances like 'the groom is on
strike' with knowledge about actions performed by the machine. But I
think Latour isn't just talking about actions - he's also talking
about roles, which are a very social thing. Because a role - a
'groom', for instance - is not just a set of actions, it is also a
collection of responsibilities, and an identity.
I somehow think that this is linked with many of our confusions
regarding machines. Take for instance Asimov's robot fiction - it
describes a fear of machines that I now think arises from a conflict
between two perspectives on machines - they're just tools used by some
actor who employs them and are therefore not accountable, or they're
(implicitly) actors in certain roles. When machines become autonomous,
those two attributes come together, and we can't resolve the ambiguity.
There's another aspect to the automating perspective - computing
technology was, a few years ago, thought socially worthy
because of its disintermediating capabilities - a computer's, and
especially the Internet's, ability to connect people without all those
pesky middlemen. But the opposite seems to be happening - we're
witnessing the birth of all sorts of interesting intermediaries. Take the
case of airplane ticket booking; initially (and I may be wrong about
this), travellers bought tickets through travel agents. The first step
after computers was to connect flyers and airlines to each other directly
through the web, and this was thought of as disintermediation. Then we
had travelocity and orbitz interposing between the airline and the
flyers to help them get the best deal - perhaps even some travel
agents use these sites. Now we have farecast.com trying to predict
airline fares across several airlines at some given date. That is,
we're evolving 'automated' systems that are continually more
intermediating.
AI & classical cognitive science theories describe three aspects of
agents: an agent as a) a seat of reason, a rational system, b) an
entity with autonomy, and c) a representative or delegate. In this
sense, the groom can be said to have a logic internal to its operation
- the spring vs hydraulic ones have different logics; the groom is
autonomous in that it operates by itself, and to the groom has been
delegated the task of closing the door. That's where the 'cost' of
technologizing comes in - as we're replacing one sort of agent (human)
with another (non-human), we are replacing one set of social relations
with another. We're replacing a human whom we can haggle with, cajole,
plead, order, bribe, and empathise with by another actor who has more
or less of these attributes. Since many of these attributes
(affordances?) are very much part of our repertoire for navigating our
social space, technologizing makes systems that much less navigable.
So when Steve's friend uses a list of email addresses managed manually
instead of a mailing list, he's opting for a specific way of
navigating and managing social relations as opposed to another.
The question, then, should not be "how can we design a better 'radio
sacking' system?" but rather "how can we maintain (or not maintain)
these social relations in question?". In which case Oliver's suggested
game should come in plenty handy.
Some issues to explore:
- what do the asymmetries between human and non-human actors imply for
how they deal and are dealt with in various situations?
- how do humans deal with each other when new and unforeseen means of
action are enabled/created by technologies? (that is, from an
ethnomethodological perspective, how do humans make a system work? how
do they produce and maintain coherent interpretations of a
technological system's behaviour? how do they account for and make a
place for its role in their life-situations? when are bugs not bugs?)
- how do various cultures and societies differ/compare in how they
accord machines a place in the order of things? what are the kinds of
roles they are given? what attributes? (or, In which we hunt for the
iPod named 'Green Destiny'... * )
- most of us in the technology world think of people as 'using'
machines. That seems to be a very modern description. When is a use
not a use? When is it a ritual? A play? A performance? An ablution? A
custom, or tradition?
Boy, that post was a damn sight longer than I expected...
* c.f. Crouching Tiger, Hidden Dragon
And as users we are maybe acting more horizontally than vertically.
Ie, disintermediation suggests a straight line relationship with
fewer nodes on that line.
But if we are triangulating now (ask.metafilter.com, tripadvisor,
seatguru, and others) while making a plan, we've done something far
different than disintermediation - we're building our tree of
information and decision flows...
This is probably off the topic a little but....
Great big picture questions elsewhere in your message, Arvind. I
won't even pretend to speak to them (at least not at 8:30 in the
morning with my first bowl of sugar cereal still making my eyeballs...).
Steve addresses some of this in his discussion about the telephone.
The fact that machines introduce themselves into social practice only
emphasizes that they are social actors, capable of blending in,
adapting to their local culture - or not! On this, I think Nass and
Reeves's experiments are fascinating, maybe because what they say (to
me at least) seems so natural - don't we see it everyday, in less
clinical circumstances, when people shout at machines? Latour might be
a bit too postmodern for some; a more modern explanation would be the
meaning-attributing qualities of people, but is the result not the
same? If so, do we really need to design machines modeled to replace
humans perfectly, thus leaving the social structure undisturbed, in
order that they fit into our society? If we keep to the old adage of
designing flexibility into interpretation, will this not happen?
But what if we don't design for flexibility? Let me ask, did we really
lose the travel agents? Today, when buying direct don't we go through
an automated version of the "ideal / controlled" travel agent, from the
perspective of the airline, the agent of the airline - a "radiosack"
for customers? But what of the helper systems that have sprung up
around them, the e-bookers et al? Who do they replace? Who were they
modelled on? They cannot replace the travel agents since they were
replaced by the "book direct" agents. Are they not a sort of society
of their own? A network of original autonomous agents, talking
together, working out the odds at rising prices, trusting their
"experience." As an alternative example, Steve's problem with his
friend could be solved quite cynically. In the technical sense, he
could automate his email system to route the offending mails directly
to trash - but isn't this like two conspirators keeping a dirty secret?
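The cynical fix really is trivial, which is part of what makes it interesting. Here's a minimal sketch (the addresses and folder names are hypothetical, and real mail clients express this as filter rules rather than code) of routing the offending mail straight to trash:

```python
# Minimal sketch of the "route it to trash" conspiracy: a client-side
# rule files mail from a blocked sender silently, so the unsubscribe
# never has to happen in the shared social forum. Addresses and
# folder names are hypothetical.

TRASH = "Trash"
INBOX = "Inbox"

def route(message, blocked_senders):
    """Return the folder a message should be filed in.

    `message` maps lowercase header names to values; mail from any
    address in `blocked_senders` is discarded without a word.
    """
    sender = message.get("from", "").lower()
    return TRASH if sender in blocked_senders else INBOX

# Usage: the newsletter keeps arriving, but its author never learns
# we stopped reading -- the social transaction is avoided entirely.
blocked = {"newsletter@example.com"}
folder_a = route({"from": "newsletter@example.com"}, blocked)
folder_b = route({"from": "friend@example.com"}, blocked)
```

The filter is the "conspiratorial friend" in code: a new actor that absorbs the awkward social work neither human wants to perform.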
In each of these examples, a social space is occupied by a machine. In
the first, a human (the travel agents) was first replaced by the
technological "travel agent" of the airlines, but this in turn opened
a space which was filled with the "society" of automated agents that
never existed before. In the second, a conspiratorial friend that
equally never existed before is invented - made out of thin air.
So how should we design? The ethnomethodological response is exactly as
you say: observe human actors in detail and produce machines that mimic
exactly - do not disturb the status quo - very task analysis. Why not
design agents for change? Distinct and different actors, not modeled
on the humans, maybe not even designed to replace humans, but designed
to fill the social space only they will occupy. What would this look
like? I cannot help but think of 'Augmenting Human Intellect' in a
Vygotskian distributed intelligence-sort-of-way.
I agree - and that was my point - that instead of disintermediating,
we're really creating a society, as you put it, of intermediaries.
> same? If so, do we really need to design machines modeled to replace
> humans perfectly, thus leaving the social structure undisturbed, in
> order that they fit into our society? If we keep to the old adage of
> designing flexibility into interpretation, will this not happen?
> So how should we design? The ethnomethodological response is exactly as
> you say: observe human actors in detail and produce machines that mimic
> exactly - do not disturb the status quo - very task analysis. Why not
> design agents for change? Distinct and different actors, not modeled
> on the humans, maybe not even designed to replace humans, but designed
> to fill the social space only they will occupy. What would this look
> like? I cannot help but think of 'Augmenting Human Intellect' in a
> Vygotskian distributed intelligence-sort-of-way.
That wasn't quite what I meant - if you really do take an
ethnomethodological perspective, a technological artifact doesn't
really possess flexibility of interpretation - except in a semiotic
sense. The important thing is that humans interpret flexibly, so much
so that according to ethnomethodologists, it might be logically
impossible to specify all the interpretations that humans can
generate. In other words, given a system of machines, ethnomethodology
would say that what is important is not only what the machines can or
cannot do - its features/bugs, or abilities as a social actor - but
the interpretive work humans do to create and maintain a coherent set
of relationships and interpretations.
Thus, it is not important to make machines that mimic humans exactly,
nor are we interested in a 'task-analysis' as such, because that puts
boundaries (temporal, spatial, social) on the tasks/actions, while
ethnomethodology claims that interpretative work is ceaseless and
produces continually changing interpretations. Once, however, we have
a sufficiently detailed account of how systems of machines are made to
work, we can then think about designing machines that occupy their own
space, and afford certain kinds of interpretation, and fail to support
others. This might even lend itself to some form of morphological
analysis... For instance, if we can identify a configuration of
interpretive abilities and interpretations that we know, from our
accounts of work, to be not sustainable, we can avoid the associated
design... or not avoid it, if you're a political scientist ;)
So we seem to agree, but for different reasons :)
I'm sure Garfinkel's work on breaching experiments might shed some
light on this - are technological interventions a form of breaching?
Pushing this idea further: breaching experiments have three essential
conditions:
1. the fundamental assumptions of the reality being breached must
2. there must be no one around to support the subject's reality
3. the subject must not be given enough time to develop an
alternate interpretation of events
Can we then think of technologically-induced angst as the result of a
massive, subtle breaching on the part of the technologists/designers
in question? The mind boggles...
I'm not sure the 'augmenting human intellect' approach is very new to
design - first of all, most designers probably already think of most
designed objects as things that affect and shape human action &
cognition; isn't that what infovis is about? What we can design to
augment, though, is the abilities of the socio-technical *system* -
isn't that what most ICT for development projects are about?
Who sees a CHI UCD4D workshop position paper coming out of this...
> Can we then think of technologically-induced angst as the result of a
> massive, subtle breaching on the part of the technologists/designers
> in question? The mind boggles...
... or the sign of some kind of social neurosis.
> I'm not sure the 'augmenting human intellect' approach is very new to
> design - first of all, most designers probably already think of most
> designed objects as things that affect and shape human action &
> cognition; isn't that what infovis is about? What we can design to
> augment, though, is the abilities of the socio-technical *system* -
> isn't that what most ICT for development projects are about?
... I'm really quite turned away from the ICT for development thing
right now, because of the kind of social neurosis / technologists' breaching
you mention above. I had thought about mentioning it when I wrote that
last paragraph - it certainly fits in to what I mean - but I think the
technological products made by ICT for development type projects now
are still just "agents" of their designers rather than independent
social actors with "minds" of their own, making their own impact on
society.
Of course, I agree that the "augmenting human intellect" approach is not
very new either. I think we agree, but I didn't explain myself properly.
Rather than augmenting the sole individual intellect, by designing
independent *new* social actors that *add* something to a society, not
just replace or augment (à la ICT for dev) an already-present actor,
should we not augment the collective (distributed) intellect by adding
something new (and hopefully intelligent!) to it? I think this is what
you mean by designing to augment "the abilities of the socio-technical
system" (and why I think we agree), or is it?