Comments on Latour?


Sam Ladner

Sep 3, 2006, 4:02:09 PM
to ae0...@wayne.edu, nsch...@hotmail.com, User Research Theory Study Group Planning
Hi all,

I would definitely recommend that we spend some quality time with
Latour's door-closer article.

I would humbly suggest that we all agree to read it by Sept. 15. On
Sept. 16, we can post to the list answers to the following question
(just for starters):

1. Re: "delegation." Latour notes,

"If we compare the work of disciplining the groom with the work he
substitutes for, according to the list defined above, we see that this
delegated character has the opposite effect to that of the hinge: a
simple task -forcing people to close the door- is now performed at an
incredible cost; the minimum effect is obtained with maximum spending
and spanking."

Latour notes that human tasks are delegated to the door -- to ill
effects. What do we do when we delegate human tasks to technology? Are
we making humans stupid? Irresponsible?

Here's my potential answer:
When we delegate human tasks to technology, we often breed a sense of
moral ambiguity. For example, it is no longer morally required to say
"goodbye" to someone on MSN. When you go off-line, MSN will tell the
other person you're gone.

Just to be completely provocative, I think technology makes us impolite,
irresponsible imbeciles.

Can we design technology that does the opposite?


~~~~~~~~~~~~
Sam Ladner
Account Planner
CRITICAL MASS
12th Floor
11 King Street West
Toronto, ON
M5H 4C7
e: sa...@criticalmass.com
v: 416.673.5275 ext. 3244
~~~~~~~~~~~~

> -----Original Message-----
> From: theor...@googlegroups.com [mailto:theor...@googlegroups.com] On
> Behalf Of CHRISTINE Z. MILLER
> Sent: Wednesday, August 30, 2006 12:30 PM
> To: nsch...@hotmail.com; User Research Theory Study Group Planning
> Subject: Re: This Group Still Active?
>
>
> I'm also very interested in continuing with the group, although I
> haven't been able to be as active as I'd like. I have downloaded the
> Latour article. It would be great to get some discussion going around
> this.
>
> Chris Miller
>
> ---- Original message ----
> >Date: Wed, 30 Aug 2006 08:23:44 -0700
> >From: "Natasha" <nsch...@hotmail.com>
> >Subject: Re: This Group Still Active?
> >To: "User Research Theory Study Group Planning" <theory-pl...@googlegroups.com>
> >
> >
> >I'm still very interested in this group. "The Sociology of a Door"
> >piece does sound intriguing. I agree with Steve that having a bit
> >of structure around the sorts of questions we should be asking of the
> >article as we read it would help. Or maybe, Oliver, you could offer the
> >initial piece of discussion and we can let it evolve from there. Is
> >there an easy online link to this article that can be provided to the
> >group?
> >
> >Happy early Labour Day Weekend,
> >Natasha
> >
> >
> >>
> Christine Miller, Ph.D. Candidate
> Business & Organizational Anthropology
> Wayne State University
>
> Cell ph#: (248) 914-0475
>
> "Those who would suppose that there is a logic which everyone would
> agree to if he understood it, are more optimistic than those versed in
> the history of logic have a right to be."
> C. I. Lewis, 1923
>
>
>


sony-youth

Sep 5, 2006, 1:58:56 PM
to User Research Theory Study Group Planning
One easy (and I hope fun) assignment question is a game that a
colleague and I played after reading the piece:

Latour argues that it is wrong to distinguish technological artifacts
from humans. Technology supplants the labour of humans; for example, he
argues that a door effectively replaces the labour of having to first
break a hole in a wall and then rebuild it again in order to travel to
the other side. This is an astonishing feat, one that would take great
effort on the part of humans. Similarly, the "groom" replaces the work
of a human porter/doorman. However, this is not a perfect exchange of
one for the other. The character of the "groom" is changed following
the exchange of the human for the technological kind. It is a strict
porter, insisting on closing the door on its own terms rather than the
terms of the visitors: it fights with visitors who wish to hold the
door open and even slams it closed on those who do not enter quickly
enough.

* Describe a technology or a device, including its environment and
social setting. This should be something real, something you have
actually seen and have experience of.

* Now, imagine the same technology or device in human terms. Describe
its interactions with its users. What does it think of them? What
does *it* think its job is? What did it lose in the transformation
from a human-performed role to one performed by a machine?

* Finally, end with the lessons you would take from this for design,
if you were to take the Latour-type analysis literally. Judge the value
of these lessons against conventional design wisdom: are they useful
or over-the-top? Do they miss something? Do they catch something
that you would otherwise have missed?

Emily Ulrich

Sep 5, 2006, 2:19:18 PM
to Oli...@sony-youth.com, User Research Theory Study Group Planning
Here's an entry for your game, Oliver:

email as stand-in for HR rep -

RadioShack lays off employees via e-mail
http://www.usatoday.com/tech/news/2006-08-30-radioshack-email-layoffs_x.htm

(http://blog.fastcompany.com/ calls it "radio sacking")

... maybe the email could have been worded more sensitively, but what else is email for but to handle issues of scale, like firing 400 people at once? Perhaps they could have crafted a beautiful exit experience and invited them all to attend.

I've been impressed by their humane group efforts in the past - Radio Shack actually did a very thorough job of culture change management when they moved these poor souls into the new HQ buildings a couple years ago. On the other hand, I suppose it's possible that the folks who handled that with such expertise have since been laid off...


Emily

sony-youth

Sep 5, 2006, 2:54:14 PM
to User Research Theory Study Group Planning
So from the Latour point of view, how would you design a "better" radio
sacking system? Is it possible to replace a human in this way? Or is
that what the email has done? Replaced the human with a cold, blunt,
"You're fired, clean out your desk."

Steve Portigal

Sep 16, 2006, 11:15:00 AM
to sa...@criticalmass.com, ae0...@wayne.edu, nsch...@hotmail.com, User Research Theory Study Group Planning
>Latour notes that human tasks are delegated to the door -- to ill
>effects. What do we do when we delegate human tasks to technology? Are
>we making humans stupid? Irresponsible?
>
>Here's my potential answer:
>When we delegate human tasks to technology, we often breed a sense of
>moral ambiguity. For example, it is no longer morally required to say
>"goodbye" to someone on MSN. When you go off-line, MSN will tell the
>other person you're gone.
>
>Just to be completely provocative, I think technology makes us impolite,
>irresponsible imbeciles.
>
>Can we design technology that does the opposite?

Of course, "impolite" is a cultural construction. Technology creates new rituals that we can be more or less deliberate about how we manage (their creation more than their use).

The telephone was a new technology that changed how people would open up communication with each other, and Alexander Graham Bell wanted people to say "Ahoy hoy" (good stuff at http://www.ahoyhoy.org/wordpress/about.php)

It never caught on, but "hello" did - see this AskMeFi thread for
various other telephone greetings: http://ask.metafilter.com/mefi/18769

I'd say the technology of the telephone - in terms of its rudeness - created an interaction without precedent and there was an attempt to culturally agree on how to handle this ritual.

[I'm sure there's a side thread about the cliche that they never show people saying "goodbye" when they hang up the phone on TV - is TV making us irresponsible :) ?]

Other communication technologies appear rapidly now - mobile phones, Skype, email, IM, SMS. And all seem to develop new standards for rituals, given that the usage is entirely different (i.e., I can start an IM conversation, I can have three of them going on at the same time, I can go have a shower and then come back and answer one, I can never answer another and that person might just "hang up" when they leave their office).

I'd suggest that these technologies are changing rapidly and the "norm" is in flux.

I'd say that what many of these communications technologies lack is any sense of context ("where are you?" being a question we might now ask if we call someone on their cell) and so interfaces that provide context info (mood, location, time online, away vs. busy vs. offline, etc.) are smoothing things for us. Does that make us more rude since we don't take extra effort to establish this information manually, as you imply, or does it prevent us from being "hurt" from inadvertent rudeness from others? YMMV.

Steve Portigal -- http://www.portigal.com/blog/
NEW: http://www.cultureventure.net

Steve Portigal

Sep 16, 2006, 11:45:00 AM
to Oli...@sony-youth.com, User Research Theory Study Group Planning
Two suggestions for further reading/exploration

http://en.wikipedia.org/wiki/Uncanny_Valley

The Uncanny Valley is an unproven hypothesis of robotics concerning the emotional response of humans to robots and other non-human entities. It was introduced by Japanese roboticist Masahiro Mori in 1970. It states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes strongly repulsive. However, as the appearance and motion continue to become less distinguishable from a human being's, the emotional response becomes positive once more and approaches human-human empathy levels.

This gap of repulsive response aroused by a robot with appearance and motion between a "barely-human" and "fully human" entity is called the Uncanny Valley. The name captures the idea that a robot which is "almost human" will seem overly "strange" to a human being and thus will fail to evoke the requisite empathetic response required for productive human-robot interaction.
....


Computers Are Social Actors - Nass and Reeves- I can't find it online anymore (and gee, it was about 10 years ago that I last found it, soooo......) - but there was a group at Stanford CASA that dealt with the ways we react to technology in human ways and there were tons of papers posted online. Interesting stuff showing the results of various experiments. I remember a set of papers that dealt with various evaluation systems; the subject sat at one computer and used a system, and then a window came up that asked for subjective opinion on the experience; OR another computer terminal asked the same questions; result is that people were more critical when they had to tell the computer itself what they thought of the experience with it.

There is a frequently cited N&R paper by that title, but that's not necessarily the one I'm thinking about. The book is
Reeves, B. & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

And Nass's (old) syllabus http://www.stanford.edu/~nass/Syllabus.Comm169.htm sounds pretty cool.

Steve Portigal

Sep 16, 2006, 11:51:00 AM
to sa...@criticalmass.com, ae0...@wayne.edu, nsch...@hotmail.com, User Research Theory Study Group Planning

>
>Just to be completely provocative, I think technology makes us
impolite,
>irresponsible imbeciles.
>
>Can we design technology that does the opposite?

Another example - I got added to a newsletter a colleague sends out.
It's a business-related item, usually, although it got into a strange
religious area this time, and I felt like it crossed a line (even
though he kind of flagged that this might make us uncomfortable) - I
wanted off - but there was no unsub link.

I wasn't prepared to write and say "take me off" but I did write and
say "you need an unsub link" - which got the response "good point -
do you want off?"

There's a face issue here; I couldn't do it. I want to unsub
silently; even if he gets the notification, I don't want the
transaction to take place in the same forum as the communication, for
fear of what it may signify.

He avoided technology (a more sophisticated distribution tool like
Google Groups) and used one-to-one email to try and manage a group
communication; the missing features of control that we the reader
don't have change our actions somehow.

I could see someone else (or me in a worse mood) seeing no link to
unsub and writing him personally in aggravation "TAKE ME OFF THIS!" -
which is more rude than simply clicking something so that he gets a
message ("Steve Portigal has been removed from your list"). Which is
more rude? Which gives me more chance to be a jerk, or to be nicer?

Not much of a conclusion; it's a complex thing you raise!

Arvind Venkataramani

Sep 19, 2006, 7:31:22 AM
to st...@portigal.com, sa...@criticalmass.com, ae0...@wayne.edu, nsch...@hotmail.com, User Research Theory Study Group Planning
Oliver: that was an interesting article, thanks for sending it our way.

I'm not yet very good at this summarising business, especially at 6 in
the morning, but off the top of my head, a couple of reactions...

This perspective of technological artifacts as delegatees (is that a
word?) is something very Latour-ian, it would seem, coming from
someone who's worked a lot with actor-network theory. From my
perspective though, what's interesting about it is that it is in
stark contrast to most conventional perspectives in HCI - where the
machine simply performs tasks that need to be done; the more
introspective of these tend to see machines - especially computing
systems - as things that reify processes. Coming from a tradition of
engineering concerned with automation, where the dominant concerns are
accuracy of machine behaviour, control mechanisms, and efficiency, it
is not surprising that machines in this perspective are visible only
as far as they are associated with tasks. Combine that with the then
predominant methods of eliciting task structure, and it's no wonder
that we technologists are where we are.

Treating the machine as a social actor in its own right seems to
provide a framework for integrating utterances like 'the groom is on
strike' with knowledge about actions performed by the machine. But I
think Latour isn't just talking about actions - he's also talking
about roles, which are a very social thing. Because a role - a
'groom', for instance - is not just a set of actions, it is also a
collection of responsibilities, and an identity.

I somehow think that this is linked with many of our confusions
regarding machines. Take for instance Asimov's robot fiction - it
describes a fear of machines that I now think arises from a conflict
between two perspectives on machines - they're just tools used by some
actor who employs them and are therefore not accountable, or they're
(implicitly) actors in certain roles. When machines become autonomous,
those two attributes come together, and we can't resolve the ambiguity
that arises.

there's another aspect to the automating perspective - computing
technology was, a few years ago, thought socially worthy
because of its disintermediating capabilities - a computer's, and
especially the Internet's, ability to connect people without all those
pesky middlemen. But the opposite seems to be happening - we're
witnessing the birth of all sorts of interesting intermediaries. Take the
case of airplane ticket booking; initially (and I may be wrong about
this), travellers bought tickets through travel agents. The first step
after computers was to connect flyers and airlines to each other directly
through the web, and this was thought of as disintermediation. Then we
had travelocity and orbitz interposing between the airline and the
flyers to help them get the best deal - perhaps even some travel
agents use these sites. Now we have farecast.com trying to predict
airline fares across several airlines at some given date. That is,
we're evolving 'automated' systems that are continually more
agent-like.

AI & classical cognitive science theories describe three aspects of
agents: an agent is 1) a seat of reason, a rational system, 2) an
entity with autonomy, and 3) a representative or delegate. In this
sense, the groom can be said to have a logic internal to its operation
- the spring vs hydraulic ones have different logics; the groom is
autonomous in that it operates by itself, and to the groom has been
delegated the task of closing the door. That's where the 'cost' of
technologizing comes in - as we're replacing one sort of agent (human)
with another (non-human), we are replacing one set of social relations
with another. We're replacing a human whom we can haggle with, cajole,
plead, order, bribe, and empathise with by another actor who has more
or less of these attributes. Since many of these attributes
(affordances?) are very much part of our repertoire for navigating our
social space, technologizing makes systems that much less navigable.

So when Steve's friend uses a list of email addresses managed manually
instead of a mailing list, he's opting for a specific way of
navigating and managing social relations as opposed to another.

The question, then, should not be "how can we design a better 'radio
sacking' system?" but rather "how can we maintain (or not maintain)
the social relations in question?". In which case Oliver's suggested
game should come in plenty handy.

Some issues to explore:
- what do the asymmetries between human and non-human actors imply for
how they deal and are dealt with in various situations?

- how do humans deal with each other when new and unforeseen means of
action are enabled/created by technologies? (that is, from an
ethnomethodological perspective, how do humans make a system work? how
do they produce and maintain coherent interpretations of a
technological system's behaviour? how do they account for and make a
place for its role in their life-situations? when are bugs not bugs,
but features?)

- how do various cultures and societies differ/compare in how they
accord machines a place in the order of things? what are the kinds of
roles they are given? what attributes? (or, In which we hunt for the
iPod named 'Green Destiny'... * )

- most of us in the technology world think of people as 'using'
machines. That seems to be a very modern description. When is a use
not a use? When is it a ritual? A play? A performance? An ablution? A
custom, or tradition?

Boy, that post was a damn sight longer than I expected...

-- arvind

* c.f. Crouching Tiger, Hidden Dragon

Steve Portigal

Sep 19, 2006, 11:33:17 AM
to Arvind Venkataramani, sa...@criticalmass.com, ae0...@wayne.edu, nsch...@hotmail.com, User Research Theory Study Group Planning

>there's another aspect to the automating perspective - computing
>technology was, a few years ago, thought socially worthy
>because of its disintermediating capabilities - a computer's, and
>especially the Internet's, ability to connect people without all those
>pesky middlemen. But the opposite seems to be happening - we're
>witnessing the birth of all sorts of interesting intermediaries. Take the
>case of airplane ticket booking; initially (and I may be wrong about
>this), travellers bought tickets through travel agents. The first step
>after computers was to connect flyers and airlines to each other directly
>through the web, and this was thought of as disintermediation. Then we
>had travelocity and orbitz interposing between the airline and the
>flyers to help them get the best deal - perhaps even some travel
>agents use these sites. Now we have farecast.com trying to predict
>airline fares across several airlines at some given date. That is,
>we're evolving 'automated' systems that are continually more
>agent-like.


And as users we are maybe acting more horizontally than vertically.
I.e., disintermediation suggests a straight-line relationship with
fewer nodes on that line.

But if we are triangulating now (ask.metafilter.com, tripadvisor,
seatguru, and others) while making a plan, we've done something far
different than disintermediation - we're building our tree of
information and decision flows...

This is probably off the topic a little but....

Great big picture questions elsewhere in your message, Arvind. I
won't even pretend to speak to them (at least not at 8:30 in the
morning with my first bowl of sugar cereal still making my eyeballs
jittery)

sony-youth

Sep 20, 2006, 2:54:03 PM
to User Research Theory Study Group Planning
Arvind, wow! what an amazing post. Thank you. I think your question
about "how can we maintain (or not maintain) social relations" when
designing systems is exactly the reason why this article struck me as
so exciting in the first place. I don't know if maintaining is wholly
necessary, though - but then, I do come from a political science
perspective :-)

Steve addresses some of this in his discussion about the telephone.
The fact that machines introduce themselves into social practice only
emphasizes that they are social actors, capable of blending in,
adapting to their local culture - or not! On this, I think Nass and
Reeves's experiments are fascinating, maybe because what they say (to
me at least) seems so natural - don't we see it every day, in less
clinical circumstances, when people shout at machines? Latour might be
a bit too postmodern for some - a more modern explanation would be the
meaning-attributing qualities of people - but is the result not the
same? If so, do we really need to design machines modeled to replace
humans perfectly, thus leaving the social structure undisturbed, in
order that they fit into our society? If we keep to the old adage of
designing flexibility into interpretation, will this not happen
naturally?

But what if we don't design for flexibility? Let me ask, did we really
lose the travel agents? Today, when buying direct don't we go through
an automated version of the "ideal / controlled" travel agent, from the
perspective of the airline, the agent of the airliner - a "radiosack"
for customers? But what of the helper systems that have sprung up
around them, the e-bookers et al? Who do they replace? Who were they
modelled on? They cannot replace the travel agents since they were
replaced by the "book direct" agents. Are they not a sort of society
of their own? A network of original autonomous agents, talking
together, working out the odds at rising prices, trusting their
"experience." As an alternative example, Steve's problem with his
friend could be solved quite cynically. In the technical sense, he
could automate his email system to route the offending mails directly
to trash - but isn't this like two conspirators keeping a dirty secret?

In each of these examples, a social space is occupied by a machine. In
the first, a human (the travel agents) was first replaced by the
technological "travel agent" of the airliners, but this in turn opened
a space which was filled with the "society" of automated agents that
never existed before. In the second, a conspiratorial friend that
equally never existed before is invented out of thin air.

So how should we design? The ethnomethodological response is exactly as
you say: observe human actors in detail and produce machines that mimic
exactly - do not disturb the status quo - very task analysis. Why not
design agents for change? Distinct and different actors, not modeled
on the humans, maybe not even designed to replace humans, but designed
to fill the social space only they will occupy. What would this look
like? I cannot help but think of 'Augmenting Human Intellect' in a
Vygotskian distributed intelligence-sort-of-way.

Arvind Venkataramani

Sep 21, 2006, 5:10:14 AM
to Oli...@sony-youth.com, User Research Theory Study Group Planning
> But what if we don't design for flexibility? Let me ask, did we really
> lose the travel agents? Today, when buying direct don't we go through
> an automated version of the "ideal / controlled" travel agent, from the
> perspective of the airline, the agent of the airliner - a "radiosack"
> for customers? But what of the helper systems that have sprung up
> around them, the e-bookers et al? Who do they replace? Who were they
> modelled on? They cannot replace the travel agents since they were
> replaced by the "book direct" agents. Are they not a sort of society
> of their own? A network of original autonomous agents, talking
> together, working out the odds at rising prices, trusting their
> "experience."

I agree - and that was my point - that instead of disintermediating,
we're really creating a society, as you put it, of intermediaries.

> same? If so, do we really need to design machines modeled to replace
> humans perfectly, thus leaving the social structure undisturbed, in
> order that they fit into our society? If we keep to the old adage of
> designing flexibility into interpretation, will this not happen
> naturally?

> So how should we design? The ethnomethodological response is exactly as


> you say: observe human actors in detail and produce machines that mimic
> exactly - do not disturb the status quo - very task analysis. Why not
> design agents for change? Distinct and different actors, not modeled
> on the humans, maybe not even designed to replace humans, but designed
> to fill the social space only they will occupy. What would this look
> like? I cannot help but think of 'Augmenting Human Intellect' in a
> Vygotskian distributed intelligence-sort-of-way.

That wasn't quite what I meant - if you really do take an
ethnomethodological perspective, a technological artifact doesn't
really possess flexibility of interpretation - except in a semiotic
sense. The important thing is that humans interpret flexibly, so much
so that according to ethnomethodologists, it might be logically
impossible to specify all the interpretations that humans can
generate. In other words, given a system of machines, ethnomethodology
would say that what is important is not only what the machines can or
cannot do - its features/bugs, or abilities as a social actor - but
the interpretive work humans do to create and maintain a coherent set
of relationships and interpretations.

Thus, it is not important to make machines that mimic humans exactly,
nor are we interested in a 'task-analysis' as such, because that puts
boundaries (temporal, spatial, social) on the tasks/actions, while
ethnomethodology claims that interpretative work is ceaseless and
produces continually changing interpretations. Once, however, we have
a sufficiently detailed account of how systems of machines are made to
work, we can then think about designing machines that occupy their own
space, and afford certain kinds of interpretation, and fail to support
others. This might even lend itself to some form of morphological
analysis... For instance, if we can identify a configuration of
interpretive abilities and interpretations that we know, from our
accounts of work, to be not sustainable, we can avoid the associated
design.. or not avoid it, if you're a political scientist ;)

So we seem to agree, but for different reasons :)

I'm sure Garfinkel's work on breaching experiments might shed some
light on this - are technological interventions a form of breaching?
Pushing this idea further: breaching experiments have three essential
characteristics
1. the fundamental assumptions of the reality being breached must
be challenged
2. there must be no one around to support the subject's reality
3. the subject must not be given enough time to develop an
alternate interpretation of events

Can we then think of technologically-induced angst as the result of a
massive, subtle breaching on the part of the technologists/designers
in question? The mind boggles...

I'm not sure the 'augmenting human intellect' approach is very new to
design - first of all, most designers probably already think of most
designed objects as things that affect and shape human action &
cognition; isn't that what infovis is about? What we can design to
augment, though, is the abilities of the socio-technical *system* -
isn't that what most ICT for development projects are about?

-- arvind
who sees a CHI UCD4D workshop position paper coming out of this...

sony-youth

Sep 22, 2006, 5:39:58 AM
to User Research Theory Study Group Planning

Arvind Venkataramani wrote:
> So we seem to agree, but for different reasons :)
I think we do too.

> Can we then think of technologically-induced angst as the result of a
> massive, subtle breaching on the part of the technologists/designers
> in question? The mind boggles...

... or the sign of some kind of social neurosis.

> I'm not sure the 'augmenting human intellect' approach is very new to
> design - first of all, most designers probably already think of most
> designed objects as things that affect and shape human action &
> cognition; isn't that what infovis is about? What we can design to
> augment, though, is the abilities of the socio-technical *system* -
> isn't that what most ICT for development projects are about?

... I'm really quite turned away from the ICT for development thing
right now, for the kind of social neurosis / technologists' breaching
you mention above. I had thought about mentioning it when I wrote that
last paragraph - it certainly fits in to what I mean - but I think the
technological products made by ICT for development type projects now
are still just "agents" of their designers rather than independent
social actors, with "minds" of their own, making their own impact on
the world.

Of course, I also mean that the "augmenting human intellect" approach is
not very new. I think we agree, but I didn't explain myself properly.
Rather than augmenting the sole individual intellect, should we not
augment the collective (distributed) intellect, by designing
independent *new* social actors that *add* something new (and
hopefully intelligent!) to a society, not just replace or augment (à
la ICT for dev) an already-present actor? I think this is what
you mean by designing to augment "the abilities of the socio-technical
system" (and why I think we agree), or is it?
