VPEC-T - from analysis to design


Richard Veryard

Oct 2, 2009, 6:26:34 PM10/2/09
I've just run a couple of Next Practice bootcamps featuring the VPEC-T
lens, and I thought I'd share the reaction I got from the participants.

There was general acceptance of VPEC-T as a diagnostic lens - for
understanding problems, analysing requirements and identifying risks.
What the participants found more difficult was to see VPEC-T as a
problem-solving lens. It tells you what's going on (they said) but it
doesn't tell you what to do.

Now this may be something to do with me rather than VPEC-T; perhaps
I'm a lot better at diagnosis than prescription, so I am better at
advocating the use of VPEC-T for diagnosis.

What does anyone else think? Is VPEC-T equally useful for both
analysis and design? Or are there other lenses that are more suitable
for solving the problems identified and analysed by VPEC-T?


Nigel Green

Oct 4, 2009, 5:17:39 AM10/4/09
VPEC-T was originally conceived as a tool to help business analysis
and to help with traceability into design and further still, to
operation. The P-E-C part came about first as a Design Pattern based
around federated information systems - Trading Partners in a Supply
Chain for example. It emerged from the design of an asynchronous
message-based parcel tracking solution and associated pub/sub pattern
we developed back in the '90s. I became fascinated by the difference
in thinking required to design and deploy such a solution compared to
more database-centric applications that were more popular at the time.
So, P-E-C has clear connections to design and this shows through in
the way that folk like Chris Bird and John Schlesinger are using VPEC-T.

The V and T parts were added when Carl and I were working with the
CJS. We found they complemented the P-E-C part by highlighting aspects
that were often missed in business analysis. But we also found they
play into the solution - hence we talk about the concept of Adoption
Engineering: the notion of 'designing in' V and T.
This includes building trust through incrementalism and designing to
directly support different 'Value Systems' (e.g. different views and
attractors). This approach aligns with a Cynefin Complex-Adaptive
style of design and development, which also aligns with Agile software
development methods.

In summary, I find VPEC-T works best when it spans analysis and
design. I describe the solution I develop using the VPEC-T lens. To
get another PoV on this, Dave Hunt and John Schlesinger have
experience of joining-up analysis and design using VPEC-T - can I
suggest you contact them directly Richard?



Nigel Green

Oct 4, 2009, 7:25:04 AM10/4/09

This video by John Holland PhD might help get into how VPEC-T works in
the design space (1 hour 11 mins - worth spending if considering Next
Practice IMO):
Complex Adaptive and #vpect work well together: V: Agent Goals/Values,
Policy: Light-constraints, Lever-points, Events: Adaptive behavior
(Agents communicate via signals). Trust is established or not between
Agents, and domain-spanning risk is catered for.

P-E-C also seem to help describe CAS building-blocks.


Richard Veryard

Oct 4, 2009, 11:47:52 AM10/4/09
Thanks Nigel.

I take the point about PEC being used as an IS design technique, but I
guess I was looking for something that "solved" the V and T as well as
the PEC issues.

I accept the relevance of Adoption Engineering, but I had seen this as
a separate lens rather than as a part of VPEC-T.

So I guess there are a couple of follow-up questions here. Does VPEC-T
have a broad applicability as a general diagnostic tool for complex
sociotechnical systems, and a deeper applicability as a design tool
within the specific IS domain? What counts as part of VPEC-T itself as
opposed to ancillary techniques?

John Schlesinger

Oct 4, 2009, 5:40:36 PM10/4/09
to vpe...@googlegroups.com
VPEC-T certainly fits very well with the way I have been doing design at the enterprise level.

The essence is to consider the context diagram of something that you are going to deploy as an information system. The events are the business events (POST or PUT) that the IS will either absorb or emit. The business events are relevant both to human interaction and to system-to-system interaction (that is, between one transaction and another). If the system is not using a transaction then it is acting as an agent for a human, not as an independent IS.

The content is the set of reports or requests (GET) that the IS can respond with. These are only for human (or agent) interaction.

Limiting the IS to these interactions is both an enormous simplification and also a very beneficial constraint. It is very similar to the constraints that made IMS and CICS so successful (message in, message out, one transaction).

It also fits perfectly with the most successful model of interaction since CICS, namely the Web HTTP model, REST.

It also fits perfectly with the extremely successful staged event driven approach, SEDA, which many of us have been using for some time.

What makes Events and Content so good is that they force you to stop thinking in terms of contingency (things that happen to be the case, but don't have to be the case) and start thinking in terms of what has to be the case - namely the business events.
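The constraint described above - business events in and out, content as read-only reports, one transaction per message - can be sketched roughly as follows. This is a minimal illustration only; the class and event names are made up for the example and are not part of VPEC-T or of John's actual designs.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessEvent:
    name: str      # e.g. "OrderPlaced" - something that has to be the case
    payload: dict

@dataclass
class InformationSystem:
    orders: list = field(default_factory=list)
    emitted: list = field(default_factory=list)

    def absorb(self, event: BusinessEvent) -> None:
        """Message in: one event, one transaction, no shared session state."""
        if event.name == "OrderPlaced":
            self.orders.append(event.payload)
            # Message out: the IS emits a follow-on business event.
            self.emitted.append(BusinessEvent("OrderAccepted", event.payload))

    def content(self, query: str) -> list:
        """GET-style report: read-only, for human (or agent) interaction."""
        if query == "open_orders":
            return list(self.orders)
        return []

ies = InformationSystem()
ies.absorb(BusinessEvent("OrderPlaced", {"id": 1}))
print(ies.content("open_orders"))  # [{'id': 1}]
print(ies.emitted[0].name)         # OrderAccepted
```

The point of the restriction shows up in what the sketch cannot do: there is no way to reach into the IS except by sending it a business event or asking it for content.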

Fitting in V, P and T makes you put the IS in its enterprise context. The P is the set of possible interactions it can have with other systems. In value chain terms, these are the trading partner profiles available. The T is the level of trust established when a TPP is used in an actual agreement, a trading partner agreement (TPA). V is the overall context for the two partners agreeing to trade.

The breakthrough for me was when I realised that any IS to IS interaction is a TPA unless it is brokered.
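The TPP/TPA distinction above can be made concrete with a small sketch. All names here are illustrative: the point is only that profiles (P) exist before any partner signs up, while trust (T) attaches to an actual agreement, not to the profile.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TradingPartnerProfile:
    """P: one possible interaction the IS offers to any partner."""
    name: str
    events_accepted: tuple

@dataclass
class TradingPartnerAgreement:
    """An actual agreement: a profile taken up by a concrete partner."""
    profile: TradingPartnerProfile
    partner: str
    trust_level: str  # T: established per agreement, not per profile

# The available profiles (P) exist independently of any partner.
tracking = TradingPartnerProfile("parcel-tracking", ("ParcelScanned",))

# Trust (T) only appears once the profile is used in a real agreement.
tpa = TradingPartnerAgreement(tracking, "CarrierCo", trust_level="provisional")
print(tpa.profile.name, tpa.trust_level)  # parcel-tracking provisional
```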


John Schlesinger
Mobile: +44 7794 353 356

Adrian Apthorp

Oct 4, 2009, 6:30:55 PM10/4/09
to vpe...@googlegroups.com

I think this is where VPEC-T provides the opportunity to take a more
holistic approach to design (I remember your comments on TOGAF 9), i.e.
the key elements of design at the enterprise level are surely the
policies and relationship systems rather than the technical structure.
This starts to sound like social engineering and I guess in some senses
it is. So for me the key design artefact of VPEC-T is the P (i.e. the
rule book). Ultimately policies and their enforcement (values and trust)
will drive the emergence of the system. Striking the right balance on
the level of policy can make the difference between chaos or overburdened
bureaucracy and a system that grows and develops.


Adrian Apthorp

Oct 4, 2009, 6:32:59 PM10/4/09
to vpe...@googlegroups.com
Need to find the time, but John Holland's book (Hidden Order) is a
classic reference on complexity.

Christopher Bird

Oct 5, 2009, 7:00:26 AM10/5/09
to vpe...@googlegroups.com
Policy is an interesting dimension. Like most of the VPEC-T dimensions it applies at many levels. So for example, we can see policy being applied in a granular sense - within a node in a value network. Policy being almost rule based. "We have a policy that people with a credit score of less than 600 may not be overdrawn by an amount that is greater than 5% of the average collected balance of the last month". Versus, "We have a policy that allows us to set limits on overdrafts". The latter applies during business analysis and is indicative of organizational policy. The former deals with specific implementations (not necessarily IT system implementations) of that policy.
When we apply the specific rule (policy), "The people with a credit score...", then we have a whole army of events to which a system has to respond. At that point the PEC pieces really earn their money. The policy has to be "checked for compliance" any time an event is raised which alters either the collected balance or the credit score. That's a kind of local (narrow scope) event - at least at the moment.
We find situations where policy can be applied more widely. Let us imagine that the bank has a policy stating that the loan:asset ratio can be no greater than 10:1 (would that we were only that leveraged!). Now any event that attempts to increase the loan balance outstanding must be checked against that policy too. So we can see 2 (almost) independent things happening: checking the overdraft rule and checking the loan:asset ratio. These are independent actions, but must be taken into account together. Broad policy application would apply the results of these narrower policy applications.
This unpacks further because the decision whether to allow or deny the overdraft is actually not made in the individual transactional systems, but in some broader risk management policy engine.
In my own thinking, I have broadened this kind of policy behavior to become "situational awareness" where there is a need to look at autonomous event "handlers" and treat them all in some aggregate sense, with a need to provide results in an ever decreasing time window.
Taking John's observations to heart, the small interactions can be handled quite RESTfully, but if we do that we have to also have the possibility that we will have to have policy handlers that span the small interactions.
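A minimal sketch of the two independent policy checks described above, aggregated by a broader decision rather than decided inside either transactional system. The thresholds (600, 5%, 10:1) come from the examples in the thread; the function names and data shapes are made up for illustration.

```python
def overdraft_policy(credit_score, overdraft, avg_collected_balance):
    """Granular rule: a score below 600 may not be overdrawn by more
    than 5% of last month's average collected balance."""
    if credit_score < 600:
        return overdraft <= 0.05 * avg_collected_balance
    return True

def leverage_policy(total_loans, total_assets):
    """Broad rule: the bank-wide loan:asset ratio may not exceed 10:1."""
    return total_loans <= 10 * total_assets

def allow_overdraft_event(event, bank):
    """Broader 'risk management engine' aggregation: both independent
    narrow results must hold before the event is allowed."""
    return (overdraft_policy(event["credit_score"],
                             event["overdraft"],
                             event["avg_balance"])
            and leverage_policy(bank["loans"] + event["overdraft"],
                                bank["assets"]))

bank = {"loans": 900, "assets": 100}
event = {"credit_score": 550, "overdraft": 4, "avg_balance": 100}
print(allow_overdraft_event(event, bank))  # True: 4 <= 5 and 904 <= 1000
```

Note that neither policy function knows the other exists - the coupling lives only in the aggregating decision, which is exactly where the "policy handlers that span the small interactions" would sit.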