Re: [FedKM:330] Re: KM Standards? Knowledge architecture?


Joe Firestone

Jun 12, 2009, 12:58:36 PM
to fe...@googlegroups.com
Hi Karen,

Just to put a period on the exchange involving APQC, I've now had an e-mail discussion with Jim Lee of APQC on questions of methodological approaches and measurement and APQC's approach to things. I think the remarks both you and I made below were to the point including our joint view that APQC can probably provide a lot of valuable information for the Center about private sector KM practices, as well as my view expressing doubts about the adequacy of its methodological, measurement, and conceptual approaches.

In my latest e-mail to Jim I stated these conclusions based on our exchange.


". . . what you offer and what I think is necessary is recognized by both of us and is different. Since APQC is in the end driven by its benchmarking of existing Best Practices, I think its methodological approaches are limited to what already exists in KM, is being practiced somewhere, and has also been made available to APQC benchmarking activities.

This is, if you will, an internal, industry-based approach to methodology. That's fine as far as it goes. But my remarks made to the FedKM group expressed doubts about the adequacy of your approaches as a guide to measuring impact. This conjecture of mine might well be correct if KM impact measurement best practices within the KM industry are themselves inadequate. Based on my own very long experience in the social sciences (see my slightly dated resume here: http://www.kmci.org/media/RESJOE.pdf ) and exposure to many different kinds of measurement methodologies, and research on measurement practices in use within KM, I believe that these practices are inadequate. As long as APQC's methodologies reflect only the best of these, they too will not meet the need of Federal KM to measure the impact of KM interventions in such a way that KM can be accountable. More is necessary than simply the best that can now be found in the industry.

There's an old saying that I ran into years ago at, of all places, the US Census Bureau which says: "It's good enough for Government Work." This is a saying that the Government needs badly to eliminate from American business folklore. The only way to do that is to make Government work so high in quality that it often leads work done in the private sector. To have that happen in KM, we have to do a better job of measuring KM impact than we currently find in private sector best practices, and that will require better measurement methodology than is currently being practiced there."

Finally, to address your suggestion about "test-driving my theories" with the help of APQC, I really don't think that's appropriate. APQC is an organization that sells the information it gathers and organizes through its benchmarking activities to raise revenue for itself. The information it gathers and organizes becomes APQC's proprietary information and doesn't appear in the open literature, where it can be critiqued by interested parties. I have no reason to contribute to this store of proprietary information. I'd rather continue to "test-drive" my own theories and methods through my own consulting, training, and open literature writing activities performed at KMCI. I think that will be of much greater benefit to both KM and myself.

Best,


Joe

Joseph M. Firestone, Ph.D.
Managing Director
Knowledge Management Consortium International and
CKIM Program
www.kmci.org
www.kmci.org/alllifeisproblemsolving
703-461-8823


and

The Adaptive Metrics Center
www.adaptivemetricscenter.com

CKO
Executive Information System, Inc.
www.dkms.com
http://radio.weblogs.com/0135950




----- Original Message -----
From: "Joe Firestone" <ei...@comcast.net>
To: fe...@googlegroups.com
Sent: Tuesday, June 2, 2009 11:02:40 AM GMT -05:00 US/Canada Eastern
Subject: Re: [FedKM:330] Re: KM Standards?  Knowledge architecture?

Hi Karen,

Thanks for your offer. I'm always happy to talk to people. But I'm not sure what you mean by "test drive" my theories. Which theories did you mean? And how would APQC even begin to test drive them?

Best,


Joe


----- Original Message -----
From: "Karen Danis" <gkd...@comcast.net>
To: fe...@googlegroups.com
Sent: Tuesday, June 2, 2009 2:50:21 AM GMT -05:00 US/Canada Eastern
Subject: [FedKM:330] Re: KM Standards?  Knowledge architecture?

Joe,

 

Goodness…a point of agreement…a rarity…cause for celebration, eh…?!

 

WRT APQC, I don’t want to get into a detailed discussion of the concerns you raise, but since their President is a well-respected KM thought leader, and the metrics they use are usually pretty simple, I don’t share your fears.  If you wanted to speak with someone to test drive your theories, I might be able to arrange it.

 

I do appreciate the one comment you made, though, acknowledging that the benchmarking info might provide a lot of good material for a KM Center….that’s the key….we need to meet client needs for actionable information.

 

                Karen

 

From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of Joe Firestone
Sent: Monday, June 01, 2009 4:44 PM
To: fe...@googlegroups.com
Subject: [FedKM:318] Re: KM Standards? Knowledge architecture?

 

Hi Karen,

I'm pretty much in agreement with this provided "lessons learned," "Best Practices," and "knowledge architecture" are construed appropriately.

About your specific remarks regarding APQC, I really can't comment very much. I suppose their benchmarking information might provide a lot of good material for a Federal KM Center, but I also suspect that their conceptual work on KM won't survive a good critique, and I also suspect that their metrics work may be methodologically off, since I don't have the impression that they either excel in measurement theory and methodology, or have a very good knowledge processing conceptual framework on which to base their indicators, however much they may emphasize metrics and indicators.

Best,


Joe

----- Original Message -----
From: "Karen Danis" <gkd...@comcast.net>
To: fe...@googlegroups.com
Sent: Monday, June 1, 2009 12:23:04 PM GMT -05:00 US/Canada Eastern
Subject: [FedKM:297] KM Standards?  Knowledge architecture?


Joe,

 

Just one final comment on the matter of standards for KM practice:  I see them emerging over time as lessons learned, and then best practices, and finally (perhaps) as standards that the CKO Council would choose to adopt. 

 

The IM/IT community has adopted standards for very specific reasons—associated with an Enterprise Information Architecture, which was congressionally mandated.  As we’ve already mentioned, the real value of a standard is to further Integration and Interoperability.  Yet, as a “best practice” in another form it can aid efficiency  as a “validated practice” that would not require as many resources to research…like the reason for benchmarking.  (If APQC were less costly on an hourly basis, I’d suggest them as possible  hosts for a KM Center.  However, their benchmarking research comes at very, very reasonable cost; a corporate membership would certainly be something to consider.  The Dept of Navy has one…what would it take to expand it to all of Fed’l gov’t…??)

 

Seems like the concept of a Knowledge Architecture would be pertinent to us.  And perhaps we’d be asking, what conventions need to be in place to “grease” knowledge flow….knowledge transfer within and across Agencies.  That’s not a question to be answered in this forum, but perhaps a relevant tasking for the KM Center—led by the CKO Council.

 

                Karen

 

From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of Joe Firestone
Sent: Saturday, May 30, 2009 11:51 AM
To: fe...@googlegroups.com
Subject: [FedKM:252] Re: GAO/KAO

 


----- Original Message -----
From: "Karen Danis" <gkd...@comcast.net>
To: fe...@googlegroups.com
Sent: Saturday, May 30, 2009 1:36:30 PM GMT -05:00 US/Canada Eastern
Subject: [FedKM:248] Re: GAO/KAO

….more interleaving.

 

Karen3: . . . But, if funding is held out as the motivator for good work, that can happen with placement in the Exec branch coupled with responsible oversight.

 

Joe3: Right, but can placement in the Executive Branch accord the Federal CKO the independence needed to avoid "the strategy exception error"?

 

 

[snip]

 

 

Karen: Re standards:  I did not say that they would be the first priority.  But there is tremendous value in having a fundamental body of knowledge about KM that undergirds practice.  It’s pretty darn hard to persuade a line manager to accept what you have to offer if you can’t describe it.  I think it’s unfortunate that we can’t agree on a “good enough” set of definitions that line management will understand. 

 

Joe: No, you didn't say that standards were the first priority. But my point is that they should be a very late priority and should only be formulated after they have already emerged and become nearly de facto. Moreover, even when they're codified at that point, it needs to be understood that standards were made to be transcended. They are part of the knowledge du jour, and in the future they may very well be refuted. Any system of standards needs to be formulated so that the standards are under continuous criticism and can be very rapidly replaced when they are failing.

 

Karen3:  late priority?  I don’t think that will be natural; I suspect the CKO Council will want to do more to benefit Fed’l gov’t as a whole, laying the foundation for cross-agency collaboration.  Yet, I agree that they should be subject to periodic review—as is the case with many standards.

 

Joe3: Good. But let's resist the idea that the CKO Council should develop standards in the short run, since nothing will do more than that to undermine distributed problem solving in KM. You know the motivations in Government very well. Once there's a standard, where's the Government employee who will go against it? Why not just let people find out what's been done, what the impact was, and what the track record of criticisms is? Then the real standards will emerge soon enough.

 

Joe2: You said: "there is tremendous value in having a fundamental body of knowledge about KM that undergirds practice." And I agree with that, but I also think that the primary value of having such a body of knowledge is that the knowledge claims constituting it can be continuously criticized and modified so that knowledge can change, grow, and get closer to the truth. And the primary danger in such KM BOKs is that when people formulate them they most often do so by compiling the knowledge claims that are thought to be the best and by ignoring the track record of knowledge claims that underlie current notions of "best." In fact, such a KM BOK is not a true knowledge base. It is just an information base, because it doesn't answer the question "why do we think this is best," which is the first thing anyone who wants to use a real knowledge base will want to know for themselves.

Karen3:  You’ve made the case for transforming KM into a science, with “validated” concepts that we practitioners can simply learn and apply.  (Who feels the need to explore the logic of Pi before using it…?)

 

Joe3: I didn't say anything about "science" or "validation." I don't believe in "validation" anyway, but only in showing people the "track record" of evaluation of alternative solutions and practices and letting them decide what they want to accept. If this is "science" then let's make the most of it. As for "Pi," it's a mathematical concept anyway, not a knowledge claim. Of course, there are knowledge claims about the world associated with it. But these are always fallible, and, in principle, their track record is always relevant to decisions about whether to accept them.

 

 

Joe2: You also say: "It’s pretty darn hard to persuade a line manager to accept what you have to offer if you can’t describe it.  I think it’s unfortunate that we can’t agree on a “good enough” set of definitions that line management will understand." And while I agree with this, I also think that we cannot have what does not yet exist. In my "On Doing Knowledge Management" paper, linked to previously, I've described how and in what way we can arrive at agreed upon definitions. Unfortunately, that will take time and successful projects based on not yet agreed upon key definitions and concepts. Until then, we should not try to force agreement, or to reach it through a politically forged consensus that gives us a definition that is politically good enough, but obviously beset with vagueness, ambiguity, and/or inconsistencies. It's easy to fool line managers with a compromise definition approved by a committee. But there will be trouble later when the mistakes inherent in that definition find their way into projects. It is better to use a definition that provides a clear guide to practice, metrics, and impact analysis than to use one that the majority of people agree upon but that lacks content.

 

Karen3:  So, are you saying that “a definition that provides a clear guide to practice, metrics, and impact analysis” does not yet exist…??

 

Joe3: No, I'm saying that more than one such definition exists, but that we don't know yet which definition is most fruitful in guiding successful conceptual specifications, practice, metrics, and impact analyses. And I'm also saying that compromise definitions often take definitions, both clear and unclear, and create from them definitions that are politically satisfying within committees, but not clear enough to support successful conceptual specifications, practice, metrics, and impact analyses.

 

 

Karen: When I was doing road trips for Dept of Navy KM, our definition of KM was one of the first slides I briefed.  If we KM practitioners did not present a unified face to line mgt, they might wonder why we appeared to disagree…which would undermine our credibility.

 

Joe2: Sorry. I think it's best to freely admit that KM is a discipline that is still young and that there is no agreement on definitions, but that it is essential to use a clear conception as the basis for projects. Then provide them with your own very clear definition and follow through with it in your project. Where one loses credibility is where one defines KM in a particular way and then does an intervention that, according to the definition, is not KM.

 

Karen3:  perhaps you can retain credibility in your universe if you say you’d like to implement a discipline that cannot be described.  But in dealing with engineers, scientists, and others with advanced degrees who recognize the value of scientific method, I would suggest it’s a nonstarter.  For someone in upper mgt to have faith in what I have to offer, I’m not likely to persuade them to take the risk that I’ll mess up their operation, or that I really can fix their problem, without some relevant facts, like what the heck KM is and, by implication, what it can offer.  I agree that a simple tactical intervention (e.g., “projects” as you have described) involving lower line managers, which resonates intuitively with their knowledge of management principles, often can be implemented w/out much communication re theory.

 

Joe3: I didn't say that KM couldn't be described. I just said that it is conceived and described in different ways. Then I said that practitioners can and should describe their way and design programs and projects around their own conception. I don't know why you think this would bother people trained in the scientific method. I suggest that it would only bother them if their training in science was deficient and overly authoritarian.

 

Here I'm reminded of a short quote from the opening of the 1956 preface to Karl Popper's book "Realism and the Aim of Science" (published in 1982). It is:

 

"As a rule, I begin my lectures on Scientific Method by telling my students that scientific method doesn't exist. I add that I ought to know, having been, for a time at least, the one and only professor of this non-existent subject within the British Commonwealth."

 

Many think that Popper is the leading Philosopher of Science of the 20th century and the only one that an appreciable number of scientists read. The first claim in this statement is certainly arguable, but that Popper is somewhere in the top 10 is probably not. The second claim in the sentence is a fact as far as we know right now. And both claims suggest that those who believe that scientific results have been valuable should think deeply about the quote from Popper I've given above. He says some other wise things about science in that preface, which you can access here:

 

http://www.amazon.com/Realism-Aim-Science-Postscript-Scientific/dp/0415084008/ref=sr_1_1?ie=UTF8&s=books&qid=1243708948&sr=1-1#reader

 

 

Karen: I’m attaching a 7-year-old document prepared by FAA (Bob Turner) that aimed to help gov’t personnel get started in KM.  Bob featured one definition of KM, and  listed 5 other definitions to illustrate diversity of thinking.  Narrowing it to 6 is good for starters!  But I like his approach (for practitioners) of landing on one, but recognizing others that highlight different characteristics.

 

Joe2: I do too. But I'd make clear that even the 6 definitions do no more than scratch the surface and that a recent survey uncovered no less than 62 definitions (See http://blog.simslearningconnections.com/?p=279). Stephen Bounds who has contributed here has done a nice analysis of these here: http://wiki.guruj.net/Holistic%21KM+Definition+Evaluation And I've commented on both here: http://kmci.org/alllifeisproblemsolving/archives/km-20-and-knowledge-management-part-21-knowledge-management-a-single-meme-multiple-meanings-and-a-very-heterogeneous-movement/ and in other places.

 

Karen3:  this issue probably merits more time than I can invest now, but I’ll offer a couple of thoughts.  What’s the problem here…are we trying to find the be-all/do-all definition that explicitly includes all the nuances of supporting theories??  Why can’t we simply talk to a few key elements:  (1) it’s deliberate (totally natural does not work); (2) it targets knowledge (although, in fact, it often targets competencies); and (3) it furthers individual and organizational excellence.   The “distinctions” can be used to bring home the WIIFM to the folks we’re talking to.

 

Joe3: In fact, I'm advocating that we do something like that, and that eventually, after some years of this and good impact evaluations, a few (and perhaps eventually one) valuable definitions will emerge. This is my thesis in the "On Doing Knowledge Management" article.

 

 

Karen: One of the core values of KM practitioners is an aversion to reinventing the wheel.  We are natural copy cats because we don’t want to waste energy creating something that’s already been created.  Lord knows, we have plenty of challenges in our jobs and are quite willing to beg, borrow, or steal someone else’s good ideas.

 

Joe2: But what happens when our need to copy overwhelms our need to critically evaluate those ideas to see whether they're really "good"? I've observed a strong tendency in KM to borrow what is perceived to be "correct," or in line with a current fad, and also to avoid critical evaluation and to focus instead on "positive thinking," the most up-to-date version of which seems to be "appreciative inquiry." This sort of approach seems very functional for thriving in social networks and also hierarchies; but I'm not sure it helps very much in helping people to get closer to the truth, a value which the past 8 years should, once again, have taught us is of overriding importance.

 

Karen3:  I agree; sometimes we are a bit lazy, when it comes to evaluation…or perhaps we have a higher tolerance for risk, and are ok with an experiment.  Clearly, “tools” need to be applied with some thought to what needs to be fixed, and how.

 

Joe3: Yes.

 

 

Karen: Perhaps because my roots are in the IM/IT world, I value standards for integration and interoperability.  Tell me—when your colleague mentions that she’s starting a CoP in a particular knowledge domain, wouldn’t it be nice not to have to ask, “do you mean a web site, or a group of people??”  Agreement on the meaning of concepts aids interoperability.

 

Joe2: Sure, agreement is great, and it really helps interoperability, but the thing about interoperability is that the agreed upon terms and language have to refer to concepts that work for us. If the agreed upon concepts are not useful, our earlier agreement may just have led to platonifying (a Taleb word) a point of view that leads to error. Richard Vines and I have written a recent paper on XML Interoperability here:

http://kmci.org/alllifeisproblemsolving/archives/interoperability-and-human-interpretative-intelligence/

 

Karen3:  Concepts that work for us?  I agree, and can’t imagine that not happening—unless the CKOs are forced, somehow, to agree on lowest common denominator.

 

Joe3: Which is what may happen if they force themselves to agree on standards.

 

 

Best,

 

 

Joe



Karen Danis

Jun 12, 2009, 7:10:35 PM
to fe...@googlegroups.com

Glad you were able to follow up with APQC, Joe.

 

APQC does not claim to be “bleeding edge” in its recommendations.  I appreciated that as a gov’t CKO---I did not have the luxury of being a test bed for experimentation. 

 

Instead, I needed to see what was working in other organizations (gov’t and industry) and determine if it was likely that my organization possessed those same conditions for success.  Then I might pilot the practice first—again, a conservative approach.

 

IMHO, if Agencies were to replicate that approach under skilled and well-connected CKOs, they could really gain traction.

Joe Firestone

Jun 12, 2009, 7:45:11 PM
to fe...@googlegroups.com
Thanks, Karen,

I'm in agreement with "piloting" solutions that have worked before under similar circumstances. That only makes good sense.

However, knowledge processing problems that really need to be solved will occur in the Government. If solutions aren't available in the private or in the Governmental sectors, they will have to be created, "bleeding edge" or not. We just don't live in a world where KM is a developed science, or, if you prefer, a branch of engineering, that can provide answers to most of the knowledge processing problems one will see from day to day.

If FedKM avoids such problems unless private sector solutions are available, then FedKM will be continuously behind the private sector in KM excellence, because such an approach would leave KM innovation to that sector. More importantly, KM is not very successful in the private sector. Why borrow private sector practices that, while they purport to be "best," have never had their impact validated using good measurement models and methodologies, working with measures that can't easily be gamed?

Richard Vines

Jun 12, 2009, 8:41:13 PM
to fe...@googlegroups.com

Hi Joe and Karen,

 

My intuitive sense is that you are touching upon a subject that is fundamental to whether the notion of KM will ever get traction at all within Government (and implied in this, for me, is the notion of Govt-facilitated networked approaches to collective intelligences, as all Governments cross multiple boundaries of jurisdiction and reach). The important issues are not necessarily to do with good practice, benchmarking, monitoring quality processes, etc., although I have no doubt that, with appropriate attention to contextual considerations and variation**, these can help.

 

From my work and reflections upon this, the real debate that needs to be had is about the systematic capture of evidence and the linking of evidence to theory frameworks: the practical and contextual consideration of “what works here, and why?” Evidence is used for a whole range of reasons, including, for example, adequately identifying and describing a problem, selecting amongst a range of intervention options available to address pressing problems, and, more widely, disrupting routine patterns of behaviour and interactions in order to integrate solutions across the complex open networks that make up the legislative, program, jurisdictional or other types of reach.

 

When I suggest this, I am not talking about static approaches to this, but what is possible through mass collaborations of evidence monitoring if sensible (and relatively simple!) approaches to “interoperability” can be made to become functional and effective. This is not an argument for “evidence based” decision making – rather an argument for “evidence informed” decision making.

 

This type of approach strengthens the notion that a KM movement is both an art and a science. Science, in the sense that we are continuously open to testing our claims (including through appropriately conceived network structures and governance systems). Art, in the sense that we rely deeply upon all the very best of our human interpretative insights to expand our thinking, our minds and our cultures. A KM “movement,” in the sense that we are opening up our interpretative intelligence capacities to possibilities of collective intelligences through distributed systems and monitoring of emergent patterns.

 

By the way, there is nothing idealized about this sort of thinking – a common complaint from our complex systems thinking people. My sense is that the things we really need to be getting our heads around are: what constitutes the nature of evidence in relation to our theories; how these notions are to be reconciled with different (and pervading) paradigmatic ways of thinking such as relativism, constructivism, and realism (including the various shades of these ways of thinking); and, in our program and project management networks, how we establish multiple layers of sensing to continuously test and adjust our claims, to ensure we are drawing upon notions of both personal and objective knowledge.

 

It just strikes me that the quality of debate is still pretty low level. I suggest that this is in part because of the fragmenting impact of different views about the very nature of sensemaking and evidence monitoring.

 

My two bob’s worth for the moment.

 

 

 

Richard

 

** These things are useful to the extent that they are not universalised.

 

 

 

 

 



Joe Firestone

Jun 12, 2009, 9:21:19 PM
to fe...@googlegroups.com
Richard,

I very much agree. In fact, the linking of evidence to theory frameworks is another way of talking about the activities of measurement modeling. We don't talk very much about that in KM, and I think that's one of our biggest problems, as I've said from time to time in blog posts.

Karen Danis

Jun 13, 2009, 12:01:39 AM
to fe...@googlegroups.com

Richard,

 

I agree that it is useful to seek “evidence informed” decision making, but I’m not sure whether you’re recommending that practice as the KM lead decides what to implement, or as the KM lead examines the knowledge flow that serves as input to corporate decision making, or something else.

 

If it is the first, who is the target audience?  Probably the KM lead.  I reported to an engineer who had little patience for “KM mumbo jumbo.”  He was more persuaded by comparisons with other organizations who successfully used the intervention I recommended—and the closer to our line of work, the better.  And if he didn’t endorse it, it didn’t happen.

 

If it is the second, perhaps you can offer an example of how that would apply.  I’m assuming that you’re talking about a novel approach—not the standard for decision support.

 

If the last, then perhaps you can simplify.  I believe that most of the folks who are foundational to this group have theoretical grounding in other areas of expertise.  We’re here because we want to help the US Fed’l gov’t gain the benefits available from KM, and most of us have had experience with at least some aspect of KM.   

 

 

                Karen

 

From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of Richard Vines


Sent: Friday, June 12, 2009 5:41 PM
To: fe...@googlegroups.com

Neil Olonoff

Jun 13, 2009, 2:09:26 AM
to fe...@googlegroups.com
Richard,

Interesting post.

Your use of the term "evidence" and your depiction of the
enterprise-wide sensing mechanism as both science and art creates two
useful new metaphors for thinking about organizational knowledge.

I think your science and art "sensing mechanism" sounds a bit like the
sensing of an organism, even a person. It's quite an expansive vision.
But would you care to fill in a bit of the detail? How would that
work, exactly?

Regards,

Neil

Neil Olonoff olo...@gmail.com
Lead, Federal Knowledge Management Initiative,
Federal KM Working Group hosted at http://KM.gov
Office: 703.614.5058 (US Army HQDA, G-4/Contracted by Innolog)
Mobile: 703.283.4157 (Disabled during working hours)
Personal profile: http://www.linkedin.com/in/olonoff
Blogging at http://FedKM.org

Richard Vines

Jun 13, 2009, 3:10:49 AM
to fe...@googlegroups.com

Hi Karen,

 

You seemed to be asking who is the target audience of my post? The target audience is anyone else who is doing their own sense-making journey of KM and knowledge and can afford to spend the time reading a slightly lengthy post. And more importantly, anyone who actively participates in network structures to support some form of adaptive change.

 

Was the target of my post about a KM Lead, or the nature of decision support systems? I think both.

 

By implication … I do not think about KM as a particular type of job specification in a hierarchical network (i.e. a KM Lead). Perhaps I should, but I think many KMers suffer the challenge of either doing what they think KM might be whilst being called something else, or being called a KM manager whilst being subjugated in a hierarchy to what others think about KM. I think you shared your own experience of the latter.

 

In my post, I was thinking more “free form” and asking the question to myself: how do “Governments” (or any other type of entity) structure interventions to solve large or small scale problems? For example, how does “a Government” (of whatever type, including at the level of a division within a Dept) structure an intervention (for example, establishing a program) to address pressing problems?

 

Perhaps you might think this type of questioning is too diffuse – talking about “Governments” in this abstract way. However, I don’t think so. When we work in networks, we aim to influence the state of those networks, and thus I am not placing accountability away from myself by writing in this way. In fact, quite the reverse … “that government of the people, by the people, for the people … [shall not perish from the earth]”.

 

In establishing an intervention, I notice “Govts” often attempt to facilitate risk management frameworks through, for example, quality compliance interventions / accreditation systems. Or regulation interventions might be conceived to prevent the emergence of excessive types of commercial behaviour. Also, Govts increasingly want perceived accountabilities, so there is growing emphasis on evaluation frameworks. I just happen to think there is potential to be more systematic about how we think about these types of interventions – and that this is where a significant possibility for “KM” lies.

 

So, for example, all of these types of interventions rely on some sort of theoretical framework, which in turn relies on some notion of “evidence”. By engaging with KM as a “way of thinking”, I was suggesting there is a possibility of influencing the design of any intervention and the way these are conceived, implemented, and monitored. Is this the role of a KM Lead? Well – I hope so – but this “way of thinking” is certainly not restricted to a “KM lead”. What it involves is thinking about serving networks more effectively. Part of this, I think, involves getting clearer about what is meant by evidence, what types of evidence are relevant, and how we collect and collate this evidence systematically, with a view to identifying emergent patterns of behaviour. That is … building the quality of the evidence base. And, beyond this, how do we think about governance and decision support systems to mediate important boundary distinctions – for example, between Government personnel and citizens?

 

[In my post, I was also being sensitive to the realities that there are heated views in the KM space about the boundaries between personal knowledge and objective knowledge, of sensemaking and the mechanics and value of explicit knowledge, of traditional notions of publishing pathways and web 2.0 initiatives including social computing and blogging].

 

So … for myself, I am hopeful KM can provide a pathway to facilitate very different ways of generating and monitoring public policy frameworks. But these novel approaches are taking time to establish and bed down, as are general agreements about the very nature of KM. Don’t get me wrong, by the way; I sincerely hope you are successful in establishing some sort of framework for embedding more systematic ways of thinking about KM in the US – beyond what is currently the case. I have noticed a trend in this direction in Australia, but not necessarily, yet, under a network of ideas associated with what we might agree to call KM.

 

Cheers,

 

 

Richard

 

Neil, I will reply to your question as soon as I get some clear space of time.

 

 

 

 

 


Neil Olonoff

Jun 13, 2009, 8:00:51 AM
to fe...@googlegroups.com
Richard,

This enterprise level "evidence collecting" model is a very large
vision. When you get around to responding to my previous question, I
wonder if you would describe it also in a smaller scope. (perhaps a
"pilot" case).

Thanks,

Neil

Neil Olonoff olo...@gmail.com
Lead, Federal Knowledge Management Initiative,
Federal KM Working Group hosted at http://KM.gov
Office: 703.614.5058 (US Army HQDA, G-4/Contracted by Innolog)
Mobile: 703.283.4157 (Disabled during working hours)
Personal profile: http://www.linkedin.com/in/olonoff
Blogging at http://FedKM.org



Joe Firestone

Jun 13, 2009, 11:01:27 AM
to fe...@googlegroups.com
Richard, Neil, and Karen,

As I've indicated in previous posts, I have a long-standing concern with questions of measurement and "evidence," as well. And even though I haven't discussed this here from an organizational systems viewpoint, I have thought and written about it in the context of work on Balanced Scorecards. Here's a background paper on problems that have developed over time with the balanced scorecard approach.

http://www.adaptivemetricscenter.com/media/BSCdevelopmentsandchallenges.pdf

and a Cutter Consortium Report on Adaptive Scorecards and Adaptive Scorecard Maturity Models downloadable with the compliments of Cutter and myself.

http://www.cutter.com/offers/adaptivescore.html

The second paper addresses the problem of fixing the learning and growth perspective in the traditional balanced scorecard. That perspective is the closest thing in traditional BSC thinking to a Knowledge Management/knowledge processing/organizational learning focus, but traditional BSC practice has developed this part of the balanced scorecard in only the most superficial way.

I wrote the Cutter Report, in part, to address this problem, and it discusses aspects of developing both KM and knowledge processing measurement models and indicators. It also incorporates a sustainability/external impact/environmental perspective, and integrates measures of openness and transparency in the context of an orientation focused on creating open enterprises. If the Federal Government moves strongly towards the Balanced Scorecard in the coming years, and also implements a National KM Center, one area of close collaboration between the Federal CKO and the Chief Performance Officer might involve developing mature adaptive scorecards for the Federal Government.

Richard, how is this thinking related to the concerns you expressed in your post?


Best,


Joe


----- Original Message -----
From: "Neil Olonoff" <olo...@gmail.com>
To: fe...@googlegroups.com
Sent: Saturday, June 13, 2009 2:09:26 AM GMT -05:00 US/Canada Eastern
Subject: [FedKM:413] Re: APQC [was RE: [FedKM:407] Re: KM Standards?  Knowledge architecture?





Richard Vines

Jun 14, 2009, 7:08:41 AM
to fe...@googlegroups.com

Apologies for the delayed response, Neil – and for the slightly rushed reply.

 

You asked me to fill out a bit more detail about this [in your words]

 

I think your science and art "sensing mechanism" sounds a bit like the sensing of an organism, even a person.

 

Before I comment, just a reflection on language – hope you don’t mind me being pedantic.

 

I did not talk about an “enterprise wide sensing mechanism” per se. I actually said: in our program and project management networks, how do we establish multiple layers of sensing to continuously test and adjust our claims to ensure we are drawing upon notions of both personal and objective knowledge?

 

Just to state why I posted in the first place: I was aiming to emphasise the thought that KM should aim to play a role beyond what we currently know and think (for example, encompassing knowledge sharing activities like benchmarking and “good practice”). I raised the topic of evidence because it is central to the idea of evidence informed decision making, and because the notion of evidence is closely associated with rationales for action … and problem solving.

 

I was in a sense suggesting that KM could play a role in increasing the quality of evidence, including in part through mass collaborations around “evidence monitoring”. This to me is a substantial challenge and would provide a valuable contribution. Why do this at all? Well, I think KM has something to contribute in terms of increasing the velocity and quality of the ways in which problems are addressed.

 

You asked me “exactly how that would work”. Well, I (with a few others) am thinking through the foundations for this right at the moment. I am not able to describe what is new about this in a short post, partly because an emergent approach aims to be integrative of a range of different domains, and a simple description, I am sure, would result in misunderstandings. However, I will aim to make anything available when I am in a position to do so.

 

Perhaps the only thing I would add for the moment is the reference to “multiple layers” of sensing.

 

Apart from the idea that metadata can be captured and monitored that is reflective of multiple levels of hierarchy and context, I am also thinking here about the many different types of evidence that can be relevant to any decision making context: anecdotes, data, internal reports, external reports, books, journal articles, and so on. An important aspect of this thinking relates to capturing metadata that is inclusive of emergent approaches to contextual information management. So, when I talk of evidence, I am specifically inclusive of what happens in the context of the everyday over time, and not specifically referring to projected notions of “expert knowledge” (without in any way dismissing the importance of this!).

 

A stream of thinking that has relevance to what I am referring to is discussed in this article, if it is of any interest to anyone.

 

Mapping the Socio-Technical Complexity of Australian Science: From Archival Authorities to Networks of Contextual Information  

http://www.informaworld.com/smpp/content~content=a903801003~db=all~jumptype=rss

 

I think this notion of contextual information management is an emergent approach that holds much potential in a whole range of ways. It is really the topic of a lengthy workshop / informal discussion (and by the way, I don’t want to give the impression of talking to the converted – this might be old hat to you anyway).

 

I know I have not fully responded to your request … yet. However, I hope this is of some assistance.

 

Cheers,

 

 

Richard

 

 

 

 

-----Original Message-----
From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of Neil Olonoff
Sent: Saturday, June 13, 2009 4:09 PM
To: fe...@googlegroups.com

Subject: [FedKM:413] Re: APQC [was RE: [FedKM:407] Re: KM Standards? Knowledge architecture?

 

 


 


I am using the Free version of SPAMfighter.
We are a community of 6 million users fighting spam.
SPAMfighter has removed 283 of my spam emails to date.

Neil Olonoff

Jun 14, 2009, 8:40:11 AM
to fe...@googlegroups.com
Hi, Richard,

Thanks for your response. I too am sensitive to the language we are
using. The term "evidence" calls up associations with the law and with
science. Seldom in organizations are we asked to provide evidence
(i.e., "proof") for our beliefs and decisions, except perhaps in the
most officious of circumstances.

It is the term "sensing" that would get the most acceptance where I
work -- in the US Army -- since the term is used in terms of achieving
situational awareness.

I reviewed the link and see that the application is in the nature of
an archival knowledge base. The need for "authoritative" data and
information sources certainly is congruent with the use of the term
"evidence." So I see now where you are coming from.

I look forward to seeing your further progress in this area.

Cheers,

Neil

Neil Olonoff olo...@gmail.com
Lead, Federal Knowledge Management Initiative,
Federal KM Working Group hosted at http://KM.gov
Office: 703.614.5058 (US Army HQDA, G-4/Contracted by Innolog)
Mobile: 703.283.4157 (Disabled during working hours)
Personal profile: http://www.linkedin.com/in/olonoff
Blogging at http://FedKM.org



Richard Vines

Jun 14, 2009, 5:26:54 PM
to fe...@googlegroups.com
Thanks Neil,

Just a quick response in relation to the term "authoritative". Whilst it is of fundamental importance (and I agree with this, as the article I referred to points out), there is also an important qualification.

One of the things I had originally indicated was the need to be inclusive of all kinds of evidence. Part of this, I think, requires thinking about different network states - how to integrate both "open networks" and hierarchical networks. The hierarchical aspects of this are complex. There is something important about the notion of time in relation to knowledge and evidence testing, which leads to a natural order hierarchy (within an evolutionary framework). Dr Bill Hall from Australia has been doing some interesting thinking about this point.

So, I only wish to emphasise that I think we have to think about evidence both in a hierarchical sense and in an open network sense. Context is king. This is an area that I think would be really helpful to open up in a range of ways.

I have noted with interest that some in the records management world are beginning to think about how to integrate the principle of "reflexivity" into their domain of practice.

I wish I had more time right now to tease out the tentacles of this point. However, I don't.

Thanks for the interchange.

Richard Vines

Jun 14, 2009, 5:28:19 PM
to fe...@googlegroups.com

Hi Joe

 

Re the last line below. A few reflections … 

 

To be frank, I get nervous even with the word “measurement”. I shouldn’t, I know. It seems to me that notions of “measurement” are often introduced without any serious insight into how such introductions impact different service systems. Too many horror stories to tell about this.

 

Re the adaptive scorecard – I agree with the need for the BSC to be reconceived, and I am sure you are right about this particular perspective: Learning and Growth.

 

From my observation, what is most useful about the use of score cards, is the potential to engage Boards in knowledge creation journeys. That is, metrics have the potential to be negotiated over time – drawing upon the impact of new knowledge (including failures) – and including the critiquing of the theories underpinning interventions.

 

Somewhat related to this, I have been heartened recently to hear people talk about the need for new types of service level agreements / contracts that are more flexible in nature and that embed the principle of reflexivity. So, we are beginning to see possibilities for constructive partnership between funders and fundees – possibly Boards and Executives as well.

 

The issue of evidence, I think, is fundamental to all these considerations. It is an area which needs much more considered thinking.

 

Not sure if I have adequately answered your question.

 

 

Richard

Joe Firestone

Jun 14, 2009, 7:42:03 PM
to fe...@googlegroups.com
Richard,

Thanks. You have,


Best,


Joe
----- Original Message -----

Karen Danis

Jun 15, 2009, 2:10:25 PM
to fe...@googlegroups.com

Richard,

 

Thank you for elaborating; I do have a better understanding.  From an operational standpoint, “evidence gathering” might apply to identifying the root causes for failing an operational inspection every other year.  Within KM, it might pertain to evaluating the health of a CoP.

 

These analyses are not restricted to KM leads; indeed, with the former, the senior operational leads (line managers) take great interest in understanding and solving the problem.  And the degree to which they involve the KM lead is indicative of the trust they have in KM interventions and the types of thinking we bring to the table.

 

I’m not sure why you are reluctant to recognize the legitimate role of a KM lead in these explorations.  I’ve acknowledged that the work is not restricted to KM leads---perhaps that was your primary point. 

 

I agree that KMers suffer challenges as you have described.  And we contribute to that issue, ourselves, when we refuse to define terms, roles, and responsibilities.  As Voltaire said, “The best is the enemy of the good.”

 

For our Federal KM Initiative to be successful, I do believe we’ll have to overcome that obstacle.

Richard Vines

Jun 15, 2009, 5:07:10 PM
to fe...@googlegroups.com

Hi Karen,

 

Thanks for this feedback. I get the gist of this, and I would be in agreement, with a few tangential thoughts below taken into account. It seems to me that the transforming impact on [your words] operational line managers occurs in part by osmosis, to do with “new ways of thinking”.

 

Reluctant to recognize the legitimate role of the KM Lead?

 

I don’t necessarily think I was expressing reluctance about the idea of a legitimate role to do interventions. It is more the basis of the interventions. Roles that lead to renewal across and through multiple levels of hierarchy (top down, bottom up, middle outwards) and that permeate out across networks are valid. It is just the language of “lead” – it sounds very centrist to me.

 

I am a supporter of knowledge interventions that are both centralized and decentralized. The only reluctance I have in even using this sort of language in the first place is that this way of thinking about KM as a science and an art can be easily misconstrued in the heat of hierarchical systems, and even in justification campaigns when there is a search for legitimacy for what might be called a “KM lead” type intervention.

 

At base level, this is why I think it is important to have some sense of what KM is about. For my part, I err on the side of “realism” and I am still working through what shades of realism I think are necessary to deal with this notion of handling all types of “evidence” in any shift towards an evidence informed decision making culture.

 

Neil – I think you are aware of the paper I wrote with Bill Hall and Luke Naismith, Exploring the Foundations of Organisational Knowledge: A synthesis grounded in evolutionary biology, on the KMCI website. This paper is really part of (and, I would hasten to add, only a start of) a sensemaking journey exploring the foundations of KM as a science. Part of the origins of this paper arose from my own journey of trying to understand the place of:

·         print-based infrastructure (print-rendered reading and learning material, versus other types of rendered content) in large complex enterprises in this era of “digital media convergence”

·         how traditional notions of “publishing workflows” (including notions of authority) are relevant to large complex enterprises when our knowledge industries are migrating away from traditional analogue workflows (based on our 500-odd years of analogue publishing) towards fully integrated digital workflows, with new and emergent authority structures based on new types of peer review and critiquing.

 

I hope others might add their views about your question. I am very interested in this topic.

 

Cheers,

 

 

Richard


Karen Danis

Jun 15, 2009, 6:00:35 PM
to fe...@googlegroups.com

Richard,

 

Centrist?  Simply because someone has a role and function as a KM lead….??   Do you think that implies that only the KM lead can, say, start an informal CoP?  (Or is this merely about the wording of the job title…??)

 

My experience stems from the role of KM lead in an organization that had very little “organizational fat.”  Thus, folks had little time/attention available for pursuing approaches outside their particular disciplines.  The more enlightened would conduct an After Action Review or Peer Assist, or create an informal CoP--if they knew how--and part of my job was to teach them.  I, too, agree with centralized and decentralized  KM interventions—to which a KM lead is essential.

 

First I had to have the authority to act.  And that meant having the recognized knowledge and positioning in the organization--which meant, some degree of formality in my job.  (That’s “realism” in my book.)

 

Yes, there was hierarchy and responsible structure, common to ventures in which, ultimately,  people’s lives are at stake.  It had its predictable difficulties with silos, and the org leadership were perpetually tinkering with the structure to help us achieve our mission—perhaps more dynamic than most organizations in Fed’l gov’t.

 

If KM has its own knowledge domain, and there are certain principles that affect success in applying KM interventions, I don’t see how someone can quibble with the title; in most organizations that’s simply responsible structuring.

 

 

Forgive me if I’ve rambled too much based on my questions….

Richard Vines

Jun 16, 2009, 4:51:18 AM
to fe...@googlegroups.com

Hi Karen,

 

At one level, I would ideally like to agree with you: a KM lead (based on a domain of practice) as a function with authority to act. With that authority to act comes the ability to make interventions, such as “teaching” busy managers about things they don’t have the time and space to do in their line management responsibilities, including establishing a community of practice. Act from the standpoint of KM as a discrete domain and go for it. [Realism? By the way – whilst not important, it is not the sort I was talking about – I was more talking about this sort of philosophical / scientific realism: http://en.wikipedia.org/wiki/Philosophical_realism].

 

My problem with this view is that it is still dependent upon the nature of the space created by the hierarchy which provides the authority to act (translated, probably, into a job spec or position description). Invariably this becomes problematic, because of the divergent ways people perceive the knowledge function, and because of the impact of complexity – where emergent properties of the system act to constrain components of the system. I wonder what your experience was over time?

 

The difference I suppose is that I think emergence occurs via network stimulation and conversations about what is possible. Self organization and emergence develops through time and new knowledge is created through time. Authority is not vested from the hierarchy, but through enrollment and joined up conversations. The focus is on the edges.

 

On the other hand, I think both of our views are probably reconcilable in that these two types of thinking can be (usually are / should be) in creative tension in any organisation / network.  Part of context analysis is to intelligently monitor system dynamics and facilitate interventions that lead to healthy adaption that in part might be based on these dynamics.

 

There is also another view, in that I see externalities (healthy forms of regulation) as providing possibilities for interventions to keep systems open and outward looking. These types of externalities can be intelligently amplified.

 

Apologies about the abstract language … I have to keep these conversations ‘decontextualised’.

Stephen Bounds

Jun 16, 2009, 6:41:22 AM
to fe...@googlegroups.com
Hi Richard & Karen,

Great conversation. But to take a page out of Joe's book, we need to be
clear about the difference between knowledge processing and knowledge
management.

- the *use* of a CoP is Knowledge Processing
- it is the *decision* to set up the CoP (and associated implementation
and oversight) that is Knowledge Management

In this context, I don't see any difference between "KM lead" and "KM
expert". Presumably, Karen was hired to exercise this expertise by:

(a) identifying suitable opportunities to implement things such as CoPs
(b) using her experiences of what makes CoPs likely to work to assist in
the implementation process

This is not "centrist" any more than a gardener centralises the
"growing" of a garden by pruning plants or adding a trellis.

In other words, Knowledge Processing is the domain of emergent activity.
Knowledge Management interventions are typically quite deliberate.

Cheers,

-- Stephen.

Karen Danis

Jun 16, 2009, 12:51:41 PM
to fe...@googlegroups.com

Richard,

 

Have to keep the conversations decontextualised?  Truly, this makes understanding difficult.  I, like most of the folks in the group (I suspect), am a practitioner at heart; as I’ve said in my profile, too much theory makes my hair hurt—the follicles are objecting as we speak!

 

But I do think I understand what you’re saying.  This sort of discussion would profit from a face-to-face encounter—far easier to clear up questions!

 

Now, to move to your problem with the view you described below:  in truth, these are the “givens” with which we have to work; we have little influence on that, being mere mortals.  Sure, we work with the corporate strategists and the opinion leaders to help them see “what is possible”, but in the end they make the decisions.

 

My experience over time?  I’ll try to summarize what was, for me, the most bizarre experience of my two parallel careers—and the most educational.  From the start my political backing was tenuous; senior leadership was not united in its support for the top guy, who moved me 3000 miles to be the KM lead.  And I was pretty green as a KMer.  Then, not long after taking the job, my expertise lifeline was cut.  I did not have a solid enough connection with the KMWG to use its members as resources; and KM was still pretty “young” in Fed’l gov’t, so few internal resources existed.

 

Management had a reputation for trying a new type of intervention every 18 months or so…thus, passive resistance and an impossible time window for demonstrating success with the pilot we had chosen (in hindsight, much more complex than initially viewed).  Opportunities for advancement were slim, so some of my internal supporters hoped to hitch their wagons to my star…so their backing was contingent on my political standing.  I did have a core of true believers, but after my immediate supervisor and his boss (the one who hired me) left, my tenuous support dried up and my core had to go underground.

 

The organization, as it appears, had deep roots (provincial) that made it hard for new senior managers to gain a foothold.  And the person to whom I reported had a hard time articulating the network dynamics—he could not give me a sense of the environment with enough clarity that potential courses of action could be intelligently evaluated.  Regardless,  I tried a number of approaches—had to learn by doing.

 

I did experience successes with structured interviews and communities of practice.   But after 5 years I retired; another (and expensive) management intervention had become firmly rooted, and my resources were gone.

 

…all of this after having experienced tremendous success under solid political backing in my prior work for the Dept of Navy CIO and CKO.

 

It was a pressure cooker.  But pressure  cookers are known to do their jobs quickly—in my case the learning was compressed into a relatively short timeframe.

 

 

It’s great to talk about intelligently monitoring system dynamics, but one has to be able to SEE what one is monitoring!  That view is always occluded to some degree.

 

 

I’ve been pretty candid here—kind of like donating my body to further medical science.  I feel “exposed”, yet do hope that others will benefit from my experience.

Neil Olonoff

Jun 16, 2009, 1:07:43 PM
to fe...@googlegroups.com
Stephen,

I think the use of the term "knowledge processing" to characterize what happens in a Community of Practice is a sterile misnomer. It does not come even close to expressing the richness of potential growth and activity.

To me, "processing" sounds like an industrial manipulation involving heating or the addition of chemicals. It sounds nothing like the organic give and take of real people engaging in conversation. There may be an appropriate use of the term knowledge processing, but I think the activity would be performed by a machine rather than a CoP.

Regards,

Neil

Neil Olonoff   olo...@gmail.com
Lead, Federal Knowledge Management Initiative,
Federal KM Working Group hosted at  http://KM.gov
Office:  703.614.5058 (US Army HQDA, G-4/Contracted by Innolog)
Mobile: 703.283.4157 (Disabled during working hours)
Personal profile: http://www.linkedin.com/in/olonoff
Blogging at http://FedKM.org


Karen Danis

Jun 16, 2009, 1:10:24 PM
to fe...@googlegroups.com
You and I are on the same page, Stephen...! (Gosh, I've really missed that
experience....your input is like a breath of fresh air. <smile>)

Karen

-----Original Message-----
From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of
Stephen Bounds
Sent: Tuesday, June 16, 2009 3:41 AM
To: fe...@googlegroups.com
Subject: [FedKM:426] Re: Conversational thinking [was Re: APQC [was RE:
[FedKM:407] Re: KM Standards? Knowledge architecture?


> for it. [Realism? By the way - whilst not important, it is not the sort
> of I was talking about - I was more talking about this sort of
> philosophical / scientific realism -
> http://en.wikipedia.org/wiki/Philosophical_realism].
>
> My problem with this view is that it is still dependent upon the nature
> of the space created by the hierarchy which provides the authority to
> act (translated probably into a job spec, or position description).
> Invariably this becomes problematic, because of the divergent nature of
> how people perceive the knowledge function and the impact of complexity
> -where emergent properties of the system act to constrain components of
> the system. I wonder what your experience was over time?
>
> The difference I suppose is that I think emergence occurs via network
> stimulation and conversations about what is possible. Self organization
> and emergence develops through time and new knowledge is created through
> time. Authority is not vested from the hierarchy, but through enrollment
> and joined up conversations. The focus is on the edges.
>
> On the other hand, I think both of our views are probably reconcilable
> in that these two types of thinking can be (usually are / should be) in
> creative tension in any organisation / network. Part of context
> analysis is to intelligently monitor system dynamics and facilitate
> interventions that lead to healthy adaption that in part might be based
> on these dynamics.
>
> There is also another view in that I see externalities (healthy forms of
> regulation) as providing possibilities for interventions to keep systems
> open and outward looking. These types of externalities can be
> intelligently amplified.
>
> Apologies about the abstract language. I have to keep these
> conversations 'decontextualised'.
>
> Cheers,
>
> Richard



Karen Danis

Jun 16, 2009, 1:32:35 PM
to fe...@googlegroups.com

Neil,

 

I did not interpret Stephen’s usage as a comprehensive description—simply identifying a key aspect of a healthy CoP that could be used to illustrate the difference between K mgt in action and K processing in action.

 

                Karen

 

From: fe...@googlegroups.com [mailto:fe...@googlegroups.com] On Behalf Of Neil Olonoff


Sent: Tuesday, June 16, 2009 10:08 AM
To: fe...@googlegroups.com

Joe Firestone

Jun 16, 2009, 4:00:01 PM
to fe...@googlegroups.com
Neil and Stephen,

"Processing" evidently has a more "mechanistic" connotation for you than it perhaps does for Stephen, Richard, or myself. You know there are such things as biological, organic, and social processes. Having said that, however, the name we give to the category of activities including:

-- seeking, recognizing, and formulating problems;
-- arriving at solutions; and
-- communicating those solutions

isn't a terribly significant issue, so if you want to suggest an alternative name, that's fine. The important points here are 1) that these activities are distinct from operational business processes and 2) that they are the activities it is the function of KM to advance.

Moving on to the notion that such activities don't reflect the full richness of a CoP, I certainly agree. A CoP is a complex system. It has an adaptive subsystem which includes its KM and "knowledge processing" activities. But it also has a lot of other activities it engages in that are about social integration rather than adaptation. "Knowledge processing" certainly is not a term intended to reflect everything done in CoPs.


Best,


Joe
----- Original Message -----
From: "Neil Olonoff" <olo...@gmail.com>
To: fe...@googlegroups.com

Neil Olonoff

Jun 16, 2009, 4:42:53 PM
to fe...@googlegroups.com
Joe,

About the phrase "knowledge processing": it's not just that I don't like the word "processing"; I think the larger issue is that I am seeking a term that "opens" the context to what may occur during this activity to combine, transform, and grow knowledge "within the process." This, surely, must be part of knowledge processing, right?

But the phrase "knowledge processing" sounds, at best, starkly transformative. It does not at all connote an efflorescence, or growth of any kind. That's my problem with it. It seems to imply a machine-age metaphor that binds and restrains an organic activity with words that, to use a synaesthetic allusion, are gray and smell of old paper.

Regards,

Neil




Neil Olonoff   olo...@gmail.com
Lead, Federal Knowledge Management Initiative,
Federal KM Working Group hosted at  http://KM.gov
Office:  703.614.5058 (US Army HQDA, G-4/Contracted by Innolog)
Mobile: 703.283.4157 (Disabled during working hours)
Personal profile: http://www.linkedin.com/in/olonoff
Blogging at http://FedKM.org


Richard Vines

Jun 16, 2009, 4:44:40 PM
to fe...@googlegroups.com

Thanks Karen and Stephen,

 

The authentic response, Karen, is really appreciated. You have highlighted something for me that appears to be an undercurrent about those that actually do the practitioner's hard yards in this space of "knowledge". My reference to "abstract / decontextualised" was not about hiding behind theory; it is more that my stories are stories I would tell with the distance of time, like yours is. Probably, the onus is now on me to think about this choice I make in more detail.

 

Your post, I think, implicitly highlights the sensitive and difficult nature of this work in complex systems. I reflect that "centrist" might be an inappropriate word ... anyway, I sense the conversation has moved beyond that focus / issue now.

 

Much of the thought leadership about KM has emerged / is emerging from the consulting domain (I may or may not be right about this). My sense is that there is a whole genre (... and Neil ... metaphor creation etc.) that is emergent from those in organisational positions who are either trying to do KM whilst being called something else, or those who are called KM practitioners and are at the mercy of complexity itself. I wonder where KMers get their different types of life lines from? [You mentioned the phrase "expertise life lines".]

 

So, the conversation has led to a sense of slight disorientation (for me), and I will sit with this for a while. These disorientations are a great outcome of participation in discussion groups like this one.

 

Like you both seem to have some intentionality about, I am also keenly interested in aspirations like giving KM a firmer foundation for us all, and connecting to things that ensure we make satisfying contributions. So, my sense of the notion of a "KM centre" (or center) in the US has been shifted somewhat. Thanks again.

 

Cheers for now,


Joe Firestone

Jun 16, 2009, 5:04:58 PM
to fe...@googlegroups.com
Neil,

I agree, and would simply repeat my call for suggestions about alternative, more evocative terminology. But until we get something more evocative, we do need a category term signifying those various activities.


Best,


Joe


Karen Danis

Jun 16, 2009, 5:46:31 PM
to fe...@googlegroups.com

Richard,

 

Thanks for understanding, and your support for solutions…for a firmer foundation. 

 

To comment on your observation re “KM practitioners…at the mercy of complexity itself”:  the scenario I described is fairly extreme, to some extent a consequence of my unfamiliarity with the organization and the crude feedback system.  One of the logical “fixes” is to hire the CKO from within—so that s/he arrives with a good sense of the political climate and the organization’s culture and norms—and also ensure the CKO is fully qualified and enjoys [at least] decent political support.

 

I enjoy innovation and have a fairly high tolerance for risk.  This situation had many of the earmarks of innovation, with all of its “firsts” and unknowns.  It was a pioneering type of experience that I’m thankful to have survived—and learned from.

 

I believe most gov’t CKOs can avoid that path.  We should be able to create a much more harmonious experience for them—provided we acknowledge and build on what we have learned.

Richard Vines

Jun 16, 2009, 6:10:06 PM
to fe...@googlegroups.com

Hi Karen,

 

It sounds like a really great cultivation project, to use Stephen's metaphor: growing conversational and reflective connections. I hope you are successful in generating a sense of space for deep listening and reflection. This would be a very significant achievement amid the absolute mayhem of government activity today, with so many huge pressing problems.

 

> We should be able to create a much more harmonious experience for them—provided we acknowledge and build on what we have learned.

 

A really interesting exchange thanks,
