The original source:
http://www.darpa.mil/darpatech2002/presentations/iao_pdf/speeches/armour.pdf
I am presuming it is in the public domain (done by a US government employee
at the time in the course of regular employment duties).
The document touches on some of the goals I'm working towards in a FOSS way
with this: :-)
http://www.twirlip.net/
Although they may not be identical...
I'm a bit more inspired by Doug Engelbart's older Augment work...
http://sloan.stanford.edu/MouseSite/dce-bio.htm
http://dougengelbart.org/
"""
The Doug Engelbart Institute was conceived by Doug Engelbart to further his
lifelong career goal of boosting our ability to better address complex,
urgent problems. As he saw it, both the rate and the scale of change are
increasing very rapidly worldwide, and we as a people must get that much
faster and smarter at anticipating, assessing, and responding to important
challenges collectively if we are to stay ahead of the curve, and thrive as
a planet. In other words, we must get faster and smarter at boosting our
Collective IQ. It is along this chosen career path that Engelbart became
prominent as a pioneer of the digital age. Best known for inventing the
computer mouse, he and his research team at SRI were responsible for many
pioneering firsts, originally showcased in his now-famous 1968 Demo. More on
this story in our History pages, a section of this website dedicated to
conserving and chronicling the past.
"""
More on Doug Engelbart's vision:
http://www.dougengelbart.org/about/vision-highlights.html
"My hypothesis is that ever-more effective "Dynamic Knowledge Repositories"
(DKRs) will be central to improving a community's Collective IQ --
essentially the capability, in dealing with a complex problem, for providing
the best, up-to-date understanding of the current state of both the problem
and of its solution efforts. Our Tool Systems would be endowed with Open
Hyper Tools specifically designed to rapidly improve our collective
process, and especially the ongoing organic emergence and utility of
comprehensive DKRs out of that process. Specially trained teams will be
involved, for instance to ingest the ongoing dialog, help in adapting to the
relevant ontological shifts, help monitor and solidify the "argument
structures" involved in seeking coherence and plausibility, etc. And also
for providing correctly associated "views" of the knowledge structure to
facilitate learning -- probably different such viewing forms for different
categories of learners."
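As a very rough sketch of the kind of data structure a DKR implies --
typed nodes, typed links, and audience-specific views -- consider this
toy Python code (all names here are hypothetical illustrations, not
anything Engelbart specified):

# A minimal sketch of a "Dynamic Knowledge Repository": typed nodes,
# typed links, and audience-specific "views". All names are hypothetical.

class Node:
    def __init__(self, node_id, kind, text):
        self.node_id = node_id    # stable identifier
        self.kind = kind          # e.g. "problem", "claim", "evidence"
        self.text = text

class DKR:
    def __init__(self):
        self.nodes = {}
        self.links = []           # (from_id, relation, to_id)

    def add(self, node):
        self.nodes[node.node_id] = node

    def link(self, from_id, relation, to_id):
        self.links.append((from_id, relation, to_id))

    def view(self, kinds):
        # A "view" here is just a filtered slice of the repository,
        # standing in for Engelbart's audience-specific viewing forms.
        return [n for n in self.nodes.values() if n.kind in kinds]

dkr = DKR()
dkr.add(Node("p1", "problem", "City flooding is worsening"))
dkr.add(Node("c1", "claim", "Upstream wetlands reduce peak flow"))
dkr.link("c1", "addresses", "p1")
for node in dkr.view({"claim"}):
    print(node.node_id, node.text)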
What would such a vision mean in an "open manufacturing" context? Or in a
social context as broad as GitHub, where teams are much more ad hoc and
overlapping?
From the 1960s through the 1980s, there was a tension between Doug
Engelbart's approach of "augmenting" human intelligence (whether of
individuals or of communities) and the approach of most of the rest of the
people in the field, who aimed to replace individuals and communities with
AI. (The AI types like Marvin Minsky and many others got almost all the
money, while Doug Engelbart struggled along later in a variety of contexts,
trying to keep some business funding going at Boeing.)
http://www.softwarepreservation.org/projects/nlsproject/
See also:
http://memex.org/licklider.html
"J.C.R. Licklider may well be one of the most influential people in the
history of computer science. As Director of the Information Processing
Techniques Office (IPTO), a division of the Pentagon's Advanced Research
Projects Agency (ARPA), Licklider from 1963-64 put in place the funding
priorities which would lead to the Internet, and the invention of the
"mouse," "windows" and "hypertext." Together these elements comprise the
foundation of our networked society, and it owes much of its existence to
the man who held the purse-strings, and also created a management culture
where graduate students were left to run a multi-million dollar research
project."
And:
"Computing's Johnny Appleseed: Almost forgotten today, J.C.R. Licklider
mentored the generation that created computing as we know it."
http://www.technologyreview.com/Infotech/12040/
If you squint at the document below just right, you might also see something
of a definition for a system to deal with decision making in communities
about complex open manufacturing design issues. :-)
If one were trying to explain the difference between the aspirations of SKDB
(as "apt-get for hardware") and Stella/OSCOMAK/Pointrel/Twirlip (as aspiring
to be a library of information plus analysis, design, and communication
tools), that document may help explain the difference.
Disclaimer: we were sub-subcontractors on the Genoa II project for a time,
as well as involved with using those ideas for Singapore's RAHS, as
mentioned here:
http://www.pdfernhout.net/reading-between-the-lines.html
http://www.pdfernhout.net/a-rant-on-financial-obesity-and-Project-Virgle.html
It's mainly the money we earned doing that (and then from work that flowed
out of it, applying such ideas in business contexts) which has given me the
free time to think about the broader application of these ideas in an open
manufacturing direction, as well as to do some personal "analysis" of a lot
of socio-economic issues in an ad hoc way... Stuff like this: :-)
http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html
http://knol.google.com/k/paul-d-fernhout/beyond-a-jobless-recovery
Or have time for things like my comments here:
"Next-generation robust distributed communications"
http://science.slashdot.org/comments.pl?sid=1753800&cid=33250830
on this:
http://science.slashdot.org/story/10/08/13/1959207/Rare-Sharing-of-Data-Led-To-Results-In-Alzheimers-Research
So, essentially, I think we need better tools so that everyone can reach
their own conclusions on such topics...
If for no other reason than this: :-)
http://en.wikipedia.org/wiki/Constructivism_%28learning_theory%29
Or including to be able to do this:
http://www.t0.or.at/delanda/meshwork.htm
"Indeed, one must resist the temptation to make hierarchies into villains
and meshworks into heroes, not only because, as I said, they are constantly
turning into one another, but because in real life we find only mixtures and
hybrids, and the properties of these cannot be established through theory
alone but demand concrete experimentation."
I do not know the current state of all those tools from all the groups
involved with that project. In the usual government fashion, the project was not
required to be FOSS... So, things became proprietary, even though they were
built with public dollars...
http://www.pdfernhout.net/open-letter-to-grantmakers-and-donors-on-copyright-policy.html
My wife has some rights in some of the work she did ("non-commercial",
whatever that means -- FOSS or not?), and she has worked towards making such
things more generally accessible (though essentially by rewriting stuff from
scratch anyway):
http://www.workingwithstories.org/
http://www.rakontu.org/
Also, the uproar about TIA (and I'm not saying privacy concerns, or concerns
about concentrations of one-way surveillance power, are unimportant) made
all of Genoa II more difficult, as the whole thing got painted with the same
broad brush:
http://www.cognitive-edge.com/blogs/dave/2007/03/unwired.php
"However Wired manages to wander off the rails to fantasy land with its
reporting of the RAHS project. I realised when they contacted me that there
was a danger of them choosing to sensationalize the project by linking it to
the Total Information Awareness (TIA) project in DARPA and the name of John
Poindexter. So right up front I explained the difference. There had been two
DARPA projects, working off two very different philosophies. One (TIA)
sought to obtain and search all possible data to detect the possibility of
terrorist events. That raised civil liberties concerns and much controversy
in the USA leading to resignations and programme closure. A parallel program
Genoa II took a very different philosophy, based on understanding nuanced
narrative supporting the cognitive processes of decision makers and
increasing the number of cultural and political perspectives available to
policy makers. I was a part of that program, and proud to be so. It also
forms the basis of our work for RAHS and contains neither the approach nor
the philosophy of TIA."
Still, obviously, there is some overlap, because once you've datamined
events (potentially in a privacy-invading way), you want to make sense of
them, whether using email, word processors, or some sort of fancy sensemaking
software. It certainly is an area fraught with ethical quagmires and various
other risks.
By the way, I implemented the particular window pictured in this article
using jGL: :-)
"Son of TIA: Pentagon Surveillance System Is Reborn in Asia"
http://www.wired.com/politics/onlinerights/news/2007/03/SINGAPORE
http://graphics.im.ntu.edu.tw/~robin/jGL/
Of all the things I've ever done in my technical career, from developing
Pet-Teach educational software as a teen in the 1970s, to writing a video
game to pay for part of college, to making a robot finger in Hans Moravec's
robot lab, to modding a Petster robot cat to be controlled over radio (in
the 1980s, before WiFi, using Forth SBCs and touchtone circuitry), to making
tools to create 3D shapes on the early SGI IRIS, to developing a simulation
of self-replicating robots on a Symbolics (perhaps the first simulation of
von Neumann's kinematic self-replication ideas), to co-writing a garden
simulator, to working on ideas that were like RDF before RDF, to co-writing
a paper on getting people to collaborate on designing space habitats, and so
on -- that screenshot of course has to be the one that finally gets into
Wired, in a scaremongering article. :-) One of life's little ironies, I
suppose. :-)
But, again, that scaremongering is not without some legitimate concerns.
Still, in practice, what happens after scaremongering is probably that such
work just goes deeper underground in the organizations, and becomes less
accessible to everyone. Given that you can just assume organizations
(including businesses like Amazon and Google) are going to have such
sensemaking tools, should not everyone have at least most of those
intelligence-oriented collaborative sensemaking tools as FOSS?
http://www.davidbrin.com/transparent.htm
So we can use them for all sorts of things, including sensemaking about
collective needs or ways to meet them through open manufacturing designs?
--Paul Fernhout
http://www.pdfernhout.net/
====
The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.
====
Mr. Tom Armour
Information Awareness Office
Genoa II
Genoa II is part of IAO's Total Information Awareness program. Genoa II is
the area with the light blue background.
We are planning to support a collaboration between two collaborations. One
of those two collaborations is composed of intelligence analysts, themselves
collaborating across organizational boundaries. Their goal might be
described as "sensemaking": developing a deep understanding of the terrorist
threat through the construction of structured arguments, models, and
simulations.
These become the basis for a collaboration with the second collaboration:
this one among policymakers and operators. Our focus is supporting
policymakers and operators at the most senior levels of government,
incidentally. Together, the two groups use the understanding captured in the
first group's arguments, models, and simulations to hypothesize about the
future, creating a set of scenarios that effectively cover and bound the
space of the plausible possible. With these at hand, the second group then
turns to the task of generating and evaluating options to respond to these
scenarios. Genoa II is all about creating the technology to make these
collaborations possible, efficient, and effective.
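Read purely as a data flow, the two-collaboration structure described above
might be sketched like this (a hypothetical toy rendering in Python, not the
actual Genoa II design):

# Hypothetical sketch of the two-collaboration pipeline described above:
# analysts build an understanding; policymakers turn it into scenarios
# and then into evaluated options.

def sensemaking(raw_reports):
    # Collaboration 1: analysts distill reports into structured arguments.
    return {"arguments": [r.upper() for r in raw_reports]}

def hypothesize(understanding):
    # Collaboration 2, step 1: generate scenarios that bound the plausible.
    return [f"scenario based on: {a}" for a in understanding["arguments"]]

def generate_options(scenarios):
    # Collaboration 2, step 2: propose candidate responses per scenario.
    return [(s, ["option A", "option B"]) for s in scenarios]

understanding = sensemaking(["report one", "report two"])
for scenario, options in generate_options(hypothesize(understanding)):
    print(scenario, "->", options)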
The unfortunate fact is that these collaborations today...if you can even
call them that...are done much as they were done twenty years ago. Means of
communication are telephone and fax. Maybe a video teleconference (VTC) now
and then... but VTC is more about alleviating the impact of traffic gridlock in
the Washington, DC, metropolitan region than improving the quality of the
collaboration. Absent are any tools to help people think together, or any
tools to support the collaboration itself as an enterprise.
I call this the bathtub: most of the effort is placed on data gathering
and presentation, while analysis is de-emphasized. The government has indeed
been slow on this score. And we have paid for this with a history punctuated
with failures of intelligence and policymaking...last September being the
most extreme to date (and we all hope for all time).
To be fair, this stuff is hard, in part because of some of these challenges:
- Need faster systems of humans and machines -- invert the "bathtub"
- Break down the information stovepipes
- Overcome wetware limitations
- Deal with data biases, especially deliberate deception
- Rapidly and deeply understand complex and uncertain situations
Of course we need to be faster so that we can react more quickly: providing
warning sooner to aid preemption, increasing the range of options and the
probability of success. But we also have to be faster so that our national
security team can deal with more issues simultaneously. "The U.S. government
can only manage at the highest level a certain number of issues at one
time -- two or three," said Michael Sheehan, the State Department's former
coordinator for counterterrorism. "You can't get to the principals on any
other issue. That's in any administration." Before September 11, terrorism
did not make that cut, as reported in the Washington Post.
That we continue to pay the price for stovepiped information systems seems
to be beyond doubt at this point. We must find ways to bring all relevant
information together while still enforcing appropriate use, releasability,
and privacy policies.
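One concrete reading of "bring information together while enforcing policy"
is a mediation layer that filters each record against the providing
organization's markings; a toy sketch, with all markings and rules invented:

# Toy sketch of policy-mediated sharing: records carry the providing
# organization's markings, and a broker releases only what the
# requester's attributes permit. All markings and rules are invented.

RECORDS = [
    {"id": 1, "marking": "public", "body": "open-source press item"},
    {"id": 2, "marking": "law-enforcement", "body": "case file note"},
]

def releasable(record, requester_communities):
    return record["marking"] in requester_communities

def query(requester_communities):
    return [r for r in RECORDS if releasable(r, requester_communities)]

print(query({"public"}))                       # sees only record 1
print(query({"public", "law-enforcement"}))    # sees both records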
The "wetware" whose limitations I mentioned is the human cognitive system.
Its limitations and biases are well documented, and they pervade the entire
system, from perception through cognition, learning, memory, and decision.
Moreover, these systems are the product of evolution, optimized for a world
which no longer exists; it is not surprising, then, that, however
capable our cognitive apparatus is, it too often fails when challenged by
tasks completely alien to its biological roots.
Intelligence analysts are taught that every source, including human assets,
technical collectors, and open sources, impresses biases upon the
information provided. Knowing this and adequately compensating for this bias
are different matters, however. And, increasingly, our opponents are
manipulating our information sources to provide a false reality. There is
nothing new about this, of course; it has long been called "deception and
denial" in intelligence circles. What is new, however, are the powerful
capabilities of technology to manipulate almost any information channel and
produce intricately orchestrated deception campaigns.
And, of course, reality itself provides a huge challenge: complexity and
uncertainty. These characterize almost every issue that today's intelligence
analysts and policymakers engage with. Yet they must rapidly and deeply
understand these issues and often must do so in an environment marked by
urgency and turbulence.
By the way, it's not individual lone rangers that must do the work, but
teams of specialists drawn from a plethora of organizations -- law
enforcement and intelligence; federal, state, and local -- who must
collaborate in an
enterprise that crosses existing organizational and hierarchical boundaries.
Doing so while maintaining necessary control and accountability is a huge
challenge.
Finally, it is not enough to deeply understand and construct effective
preemptive options...this all must be explained in a persuasive way to other
stakeholders and overseers -- a reality that is too often overlooked.
Genoa II's predecessor program, Genoa, was about getting "smarter" results
by harnessing the diversity of lone rangers, bringing them together as a
team and supporting them with technology to discover relevant information,
reason systematically about it, and capture and reuse knowledge created by
this and other teams. We believe that Genoa I did produce better, deeper
understandings of complex situations, but it did so at a price: speed. Lone
rangers are, after all, much more nimble than most teams.
With Genoa II, we want to improve on both dimensions, and do so with teams
that function at the edges of existing organizations while having access to
the information and other resources of the participating organizations. I
will talk about the three themes of the Genoa II program. These themes are:
becoming faster through automation; becoming smarter through what we are
calling cognitive amplification; and working more jointly through
center-edge collaborative technology.
The first theme -- becoming faster through automation -- involves applying
automation to the front and back ends of the analytic process so more time
is available for the actual analysis, which is, of course, the whole point.
The "front end" of the system refers to the beginning of the analytic
process where the tasks involve finding relevant data and then preparing it
to support the analytic task. "Back end" refers to the presentation of the
results of analysis, capturing strategic knowledge for reuse, and
maintaining the knowledge repository. In addition, by creating a better
environment to work in, we can achieve speed gains end-to-end.
So here are three bumper sticker phrases that capture Genoa II's automation
goals: read everything without reading everything; maintain a consistent and
accessible world view; and begin the trip to computers as servants to
partners to mentors. The first is perhaps a bit idiosyncratic if not
downright contradictory. What's the opposite of a tautology? But I think you
know what I mean. Today's analysts and policymakers acquire information
chiefly by reading documents or electronic facsimiles of them. Even today,
analysts spend much of their time pressing a button that issues the command
"NEXT DOC," to request the next document. We've got to get past the NEXT DOC
world. There is too much that must be read to actually read.
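At minimum, "read everything without reading everything" suggests machine
triage: rank the pile by relevance to the analyst's question so that human
reading time goes to the most promising documents first. A bare-bones
sketch, using simple word overlap as a stand-in for whatever extraction and
summarization technology the program actually had in mind:

# Bare-bones triage: rank documents by word overlap with the analyst's
# question, a crude stand-in for real extraction/summarization tools.

def tokenize(text):
    return set(text.lower().split())

def rank(question, documents):
    q = tokenize(question)
    scored = [(len(q & tokenize(d)), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = [
    "shipment of machine parts delayed at border",
    "annual rainfall statistics for the region",
    "border crossing records show unusual machine parts traffic",
]
for doc in rank("machine parts border", docs):
    print(doc)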
I summarize the focus of the back end activities as being about creating a
world view -- the results of the knowledge work of teams of analysts -- that
is internally consistent, that is maintained consistent even as the
understanding of its various aspects evolves independently, and that is
accessible to team members and policymaker collaborators in an efficient but
persuasive way.
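Keeping a shared world view "maintained consistent" implies machinery for
flagging when independently evolving pieces begin to conflict; a toy
version, with the triple representation invented purely for illustration:

# Toy consistency check over a shared world view: assertions are
# (subject, predicate, value) triples, and the repository flags any
# subject/predicate pair asserted with conflicting values by
# different teams.

def find_conflicts(assertions):
    seen = {}
    conflicts = []
    for team, (subj, pred, value) in assertions:
        key = (subj, pred)
        if key in seen and seen[key][1] != value:
            conflicts.append((key, seen[key], (team, value)))
        seen[key] = (team, value)
    return conflicts

world_view = [
    ("team-a", ("facility-x", "status", "active")),
    ("team-b", ("facility-x", "status", "dismantled")),
]
for conflict in find_conflicts(world_view):
    print("conflict:", conflict)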
The final goal is to create a computing environment that is aware of its
users' contexts and goals in a deep and even thoughtful way, and that can
tailor itself intelligently and proactively, offering relevant resources:
information, tools, techniques, and other people, for instance.
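That "servants to partners to mentors" progression might start with
something as simple as matching the user's current task context against a
catalog of resources; everything in this sketch is invented to illustrate
the idea:

# Minimal context-matching sketch: given tags describing the user's
# current task, suggest resources whose tags overlap. All catalog
# entries are invented.

CATALOG = [
    ("link-analysis tool", {"network", "entities"}),
    ("regional expert (Dr. Lee)", {"region-x", "politics"}),
    ("deception checklist", {"sources", "bias"}),
]

def suggest(context_tags):
    return [name for name, tags in CATALOG if tags & context_tags]

print(suggest({"sources", "region-x"}))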
The second theme again is becoming smarter through cognitive amplification.
Consider this citation, from Daniel Dennett's book, "Kinds of Minds:"
quoting Bo Dahlbom and Lars-Erik Janlert: "Just as you cannot do very much
carpentry with your bare hands, there is not much thinking you can do with
your bare brain." I stumbled across this quote while reading Dennett's book
and was struck by how relevant it is to this second Genoa theme -- and how
odd it was in the book's context -- but that's another story.
Tools for amplifying our intellect are nothing new, nor do they necessarily
employ high tech. Consider, for instance, paper and pencil. Even this simple
Cognitive Amplification System (CAS) exhibits what I consider to be the two
essential elements of tools for thinking: they permit people to structure
their thinking in some way, and to externalize it. Not only does this
improve the quality of the intellectual work, but it makes it possible for
people to think together, which, as I've said, is an important theme for
Genoa II: people have to think together as they participate in teams, and
thinking together yields better results because it harnesses diversity in
knowledge, expertise, experience, outlook, and so forth. And, again, we need
cognitive amplifiers to help us
deal with complexity and uncertainty, and to overcome the limitations and
biases of our biological cogno-ware.
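In code, "structure your thinking and externalize it" could be as simple as
a shared argument object that teammates can inspect, challenge, and extend;
a hypothetical sketch:

# Hypothetical externalized argument: a claim with attached pro and con
# evidence that teammates can inspect, challenge, and extend.

class Argument:
    def __init__(self, claim):
        self.claim = claim
        self.pro = []   # supporting evidence, (source, note)
        self.con = []   # contrary evidence, (source, note)

    def lean(self):
        # Crude plausibility signal: evidence counts, nothing more.
        return len(self.pro) - len(self.con)

arg = Argument("Group Y is acquiring machine tools")
arg.pro.append(("customs record", "unusual parts shipments"))
arg.con.append(("asset report", "no matching purchases found"))
print(arg.claim, "lean:", arg.lean())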
So we plan to build cognitive amplification tools for the four purposes
listed on the slide: modeling current state, estimating plausible futures,
performing formal risk analysis, and developing options. These tasks occur
more or less in sequence as the teams engage a new problem.
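Of those four purposes, formal risk analysis is the most directly
computable; the usual probability-times-impact arithmetic looks like this
(all numbers invented):

# Expected-loss arithmetic over a scenario set: risk = probability x
# impact, summed per option. All numbers are invented.

scenarios = [
    {"name": "attack on port", "probability": 0.05, "impact": 900},
    {"name": "cyber disruption", "probability": 0.20, "impact": 300},
]

def expected_loss(scenarios, mitigation=1.0):
    # mitigation < 1.0 models an option that reduces impact.
    return sum(s["probability"] * s["impact"] * mitigation for s in scenarios)

print("do nothing:", expected_loss(scenarios))
print("harden systems:", expected_loss(scenarios, mitigation=0.6))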
While Genoa I focused on tools for people to use as they collaborate with
other people, in Genoa II, we also are interested in collaboration between
people and machines. We imagine software agents working along with humans in
creating models using our tools...and having different sorts of software
agents also collaborating among themselves. Thus we imagine the three modes
of collaboration shown on the slide: people with people, machines with
machines, and people with machines.
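A minimal rendering of the "people with machines" mode: a software agent
that proposes model changes through the same interface humans use, leaving
acceptance to the human. All of this is hypothetical:

# Hypothetical mixed-initiative loop: an agent proposes additions to a
# shared model; a human accepts or rejects each proposal.

def agent_propose(model, incoming_reports):
    # The agent suggests facts not yet in the model.
    return [r for r in incoming_reports if r not in model]

model = ["facility-x is active"]
reports = ["facility-x is active", "new shipment observed at facility-x"]

for proposal in agent_propose(model, reports):
    answer = "y"  # stand-in for an interactive prompt to the analyst
    if answer == "y":
        model.append(proposal)
print(model)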
Finally, the third theme: a collaborative environment that supports work at
the edges of existing organizations, and supports the sort of bottom-up,
self-organizing, and self-directing team work that we imagine will be
essential to combating networked threats.
Sure, we have teamwork today, but the necessary process and policy support
is provided by existing hierarchical organizations. The rub comes when
people come together from very different organizations with very different
policies and processes. Such teams will need to create and negotiate these
things themselves, quickly and effectively, and on the fly. So we intend to
provide support for the full life cycle of "edge" teams... applications and
databases to support the work itself: resource discovery, role negotiation,
policy development and enforcement, planning, execution monitoring,
strategic knowledge capture, after-action review -- the lot.
Equally challenging will be supporting the coexistence of such teams
operating on the edge with the centers of their multiple home institutions.
We must find ways to provide the control and accountability that such
organizations demand of their members, as well as ways to tap into
center-based resources and make them available to the team while complying
with the use policies of the providing organizations. Needless to say, we
haven't begun to do this effectively, and getting traction on the problem
will require the inspired application of technology as well as innovation in
policy and process design. And, I might add, it will require paying serious
attention to creating what I'll call an "intentional culture" that is
supportive of this way of working. Indeed, we see the challenge here as
creating coordinated revolutions in the domains of technology, process and
policy, and culture.
So this is Genoa II. Faster, smarter, and jointer: three major themes, with a
number of initiatives under each. Five years to create three coordinated
revolutions. Thank you for your attention.