Category Archives: Asymmetric Design

Responding to asymmetric demand through a collaborative process.

Evaluating platform architectures within ecosystems: modeling the supplier’s relation to indirect value

by Philip Boxer, PhD

I have completed a PhD by publication at Middlesex University’s School of Engineering and Information Science under the supervision of Professor Martin Loomes.  Here is its abstract:

This thesis establishes a framework for understanding the role of a supplier within the context of a business ecosystem. Suppliers typically define their business in terms of capturing value by meeting the demands of direct customers. However, the framework recognises the importance of understanding how a supplier captures indirect value by meeting the demands of indirect customers. These indirect customers increasingly use a supplier’s products and services over time in combination with those of other suppliers. This type of indirect demand is difficult for the supplier to anticipate because it is asymmetric to their own definition of demand.

Customers pay the costs of aligning products and services to their particular needs by expending time and effort, for example, to link disparate social technologies or to coordinate healthcare services to address their particular condition. The accelerating tempo of variation in individual needs increases the costs of aligning products and services for customers. A supplier’s ability to reduce its indirect customers’ costs of alignment represents an opportunity to capture indirect value.

The hypothesis is that modelling the supplier’s relationship to indirect demands improves the supplier’s ability to identify opportunities for capturing indirect value. The framework supports the construction and analysis of such models. It enables the description of the distinct forms of competitive advantage that satisfy a given variety of indirect demands, and of the agility of business platforms supporting that variety of indirect demands.

Models constructed using this framework are ‘triply-articulated’ in that they articulate the relationships among three sub-models: (i) the technical behaviours generating products and services, (ii) the social entities managing their supply, and (iii) the organisation of value defined by indirect customers’ demands. The framework enables the derivation from such a model of a layered analysis of the risks to which the capture of indirect value exposes the supplier, and provides the basis for an economic valuation of the agility of the supporting platform architectures.

The interdisciplinary research underlying the thesis is based on the use of tools and methods developed by the author in support of his consulting practice within large and complex organisations. The hypothesis is tested by an implementation of the modelling approach applied to suppliers within their ecosystems in three cases: (a) UK Unmanned Airborne Systems, (b) NATO Airborne Warning and Control Systems, both within their respective theatres of operation, and (c) Orthotics Services within the UK’s National Health Service. These cases use this implementation of the modelling approach to analyse the value of platforms, their architectural design choices, and the risks suppliers face in their use.

The thesis has implications for the forms of leadership involved in managing such platform-based strategies, and for the economic impact such strategies can have on their larger ecosystem. It informs the design of suppliers’ platforms as system-of-system infrastructures supporting collaborations within larger ecosystems. And the ‘triple-articulation’ of the modelling approach makes new demands on the mathematics of systems modelling.

The following summarises the argument in terms of Value for Defence:

Value for Defence

Ideologies of Architecture

by Philip Boxer

While following up on a NECSI and MIT ESD seminar, I came across this paper by Joel Moses on three different organizing ideologies for the design of large scale engineering systems.   He summarizes the three as follows:

Such approaches are usually called design methodologies. We discuss the top-down structured methodology, the layered or platform-based methodology, and the network-based methodology. Such design methodologies are associated with organizational structures or architectures, such as tree-structured hierarchies, layered hierarchies and generic networks. We point out how these design methodologies relate to cultural attitudes toward engineering.

While the systems engineering ideology is still rooted in the ‘tree-structures’ of hierarchical decomposition, its more recent recognition of ‘directed’ and ‘acknowledged’ systems of systems (SoS) belongs to the ‘platform-based architectures’ that fall comfortably under the canon of Product-Line Practices[1]. The challenge comes with the ‘network-based architectures’, in which layering becomes an emergent property of the network. The environments in which these architectures are to be found exhibit Ultra-Large-Scale characteristics, the forms of social organization being supported are collaborative and co-producing, and the architectures of the systems of systems supporting these environments themselves need to be collaborative.

The difference between the platform-based and network-based architectures is whether or not layering can be treated as an a priori property of the architecture.  Network-based architectures enable the emergence of multiple forms of layering with respect to the multiple concurrent forms of collaboration that they support.  There is no longer a direct relationship between design and predictable uses, rendering the design of network-based architectures asymmetric to the architectures of demand.

[1] Note, however, that the ‘platform-based architectures’ that fall under Product-Line Practices are restricted to supporting a one-sided relation to markets. They are to be further distinguished from the network-based, multi-sided platform architectures that support multi-sided relations to demand. See what distinguishes a platform strategy.

Why critical systems need help to evolve

by Bernie Cohen

I attended the Schloss Dagstuhl conference in December 2009. My argument was that critical systems evolve because they are embedded in socio-technical ecosystems. This subsequently became a paper published in IEEE Computer in collaboration with Philip Boxer, with the following abstract:

Classical engineering fails to model all the ways in which a critical sociotechnical system fits into a larger system. A study of orthotics clinics used projective analysis to better understand the clinics’ role in a healthcare system and to identify risks to the clinics’ evolution.

The paper was part of a special issue bringing together software engineering researchers and practitioners focused on evolving critical systems. The introduction identified five game changers framing the research agenda in this field:

  • Software ubiquity. More software is being deployed in more consumer devices, meaning that failures are more likely to affect ordinary people.
  • Software criticality. As software embeds itself deeper into the fabric of society, single software failures have greater potential to affect more people. This increases the potential for software to be considered critical even when it isn’t complex.
  • People-in-the-loop. As software is deployed to control systems in which human actors participate, the issue of human interactions with software becomes more important.
  • Entanglement. Software dependencies have become more complex, and much real-world software is entangled with software developed by third-party providers.
  • Increased evolution tempo. The tempo of evolution will continue increasing as users expect more from software. The software market is often unforgiving when even small changes can’t be done cheaply and quickly.

The hole-in-the-middle

by Philip Boxer

The blog on the health service distinguished between three levels of involvement with the patient, moving from (1) being centred on providing specific treatments, to (2) being centred on episodes of care, to (3) being centred on the patient’s experience of care over time. These levels were originally separated out in the paper by Prahalad and Ramaswamy on The New Frontier of Experience Innovation. They made the more general distinction between competing in a product space, a solution space and an experience space.

The point they were making was that the third of these required a fundamentally different approach to the relationship to the customer, which I have described in terms of rcKP and the third asymmetry.

The blog gave an account of this difference in terms of changes in level of governance architecture – from the relatively internal concerns of the first two levels with the governance of care provision and of clinical referral pathways, to the through-the-life-of-the-condition concern with the patient’s care at the third level. It then concluded that this third form of governance:

“… in turn requires forms of support and transparency that can enable such change to happen, by providing funding for the transition, by providing support for this way of working out how to effect change, and by ensuring that the changes made can be sustained in a way that is accountable.”

Putting this together into a 3 x 3 grid creates a value stairs. Establishing where you are and where you need to be on this value stairs, given the competitive asymmetries in force, is fundamental to deciding how to exploit the three potential asymmetries. Working with another client gave another perspective on the challenges involved – a telecommunications service provider whose role it was to provide just such forms of support and transparency.

In this case the levels in the value stairs were expressed in terms of the contractual framework within which the relationship with the customer unfolded over time. What characterised the resultant space as a whole was that the bottom-left three squares were very efficiently occupied by the enterprise on the basis of commodity services, while the top-right three were provided on a cottage industry basis by a high value-adding consultancy and bespoke services to relatively small numbers of large enterprises.


Given that competitive forces were driving the enterprise up the value stairs, the challenge had become the hole-in-the-middle. This was too expensive to satisfy using the bespoke approach of the top-right, and the variety of demands was too complex to be satisfied on the commoditised basis of the bottom-left. In terms of what we need to learn about complex systems, the challenge was to find ways of operating in the collaborative quadrant below:

What was the answer? To start with, the whole business infrastructure had to be digitised so that it could be offered on a service-oriented basis. Then to leverage this capability, different ways of managing the relationship with the customer had to be found – the enterprise had to develop an approach to managing this infrastructure that could be dynamically customised from the edge of the business. This they are still in the process of doing.

More than socio-technical systems analysis

by Philip Boxer

Larry Hirschhorn and his co-authors raise an interesting question in their paper on sociotechnical systems in an age of mass customisation. They consider what happens in a pilot plant whose sole object is to learn new ways of organising production processes. What they discover is that in the place of worker autonomy as a goal, the meaning of the work becomes pre-eminent, and creating task boundaries becomes a dynamic collaborative process in a way that dissolves the old worker-manager distinction. This focus on meaning goes beyond the old focus on improving the quality of life in stable production environments:

“… When socio-technical systems theory (STS) first emerged as a discipline its moral roots in a worker’s right to competence and its political roots in industrial democracy enabled its practitioners to reach beyond the narrow issue of industrial efficiency, but the era of mass customisation has so up-ended the occupational structure – the distinction between working and managing is slipping away – that STS, a creature of the era of mass production, may slip into history.”

So in what ways must our understanding of socio-technical systems be extended to build on their rich legacy? Two points emerge as being key:

  • the dynamic nature of the relationship that is needed with the context for the work of the pilot plant in terms of what the customer wants, and
  • the meaning of the work within the larger context of the enterprise and its goals.

The first of these reflects the shifting of power over service design to the edge, arising from having to address the third asymmetry. The second raises the larger question of how that ‘edge’ is defined in the interests of the enterprise when demand becomes asymmetric – the what, how, who/m and why all have to be made responsive to demand.

So what does this require of the kinds of modelling we use? Two kinds of innovation are needed.

Firstly, we need to use an approach that can model the structure-determining processes as well as those that are structure-determined.

Secondly, we need to add to the models of task, information and sentient systems the related models of the organisation of task and information systems, and of the contexts out of which demands are arising. This gives us five distinct perspectives on the enterprise:

    (1) the task systems, (2) the information systems, (3) the vertical (hierarchical) and (4) horizontal (collaborative) organisation of those task and information systems, and (5) the organisation of demand within its customer context.

Putting all of these together as a composite model of the ways these systems are or are not consistent with each other is itself an expression of the ‘I’ of the modeller(s). And this is a way in which to collaborate in the construction of shared meaning.


by Bernie Cohen
We see news today in Parliament and the Guardian that 20 leading academics have sent an open letter to MPs questioning whether the £6.2bn project to upgrade the UK’s National Health Service IT system will work. At the heart of this system of systems is the use of the electronic health record (EHR) and the ways in which it can be made available and shared. The approach is based on one of standardisation across the NHS as a whole, but the view of the academics being reported on is that the meaning of its content must frequently be contingent not only on who wrote it, but also on the context in which it is being read. The concern is therefore whether the fundamental premise on which the EHR is being built is flawed in some way. What underlies this concern?

Languages in general, and programming languages in particular, are formalised at a number of levels: lexical, which determines what sequences of symbols constitute valid terms in a language; syntactic, which determines what sequences of terms constitute valid statements; and semantic, which provides a mathematical domain and determines what object in that domain is denoted by each valid statement.

The only meaning that can be attributed to a statement in a language so formalised is the mathematical object that it denotes and the treatment of this kind of meaning, called denotational semantics (cf Strachey, Milne, Stoy, Schmidt etc.), is a large and complex mathematical field involving algebraic topology and category theory (cf the LNCS series on ‘Category Theory and Computer Science’).
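These three levels can be made concrete with a toy example. The following sketch is illustrative only: the grammar, the token set and the choice of the integers as the semantic domain are assumptions made for exposition, not a reconstruction of any formalism cited above.

```python
import re

# Lexical level: which sequences of symbols constitute valid terms.
TOKEN = re.compile(r"\s*(?:(\d+)|([+*])|(\()|(\)))")

def tokenize(text):
    text = text.rstrip()
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise ValueError(f"lexical error at position {pos}")
        tokens.append(m.group(m.lastindex))
        pos = m.end()
    return tokens

# Syntactic level: which sequences of terms constitute valid statements.
# expr := term ('+' term)* ; term := factor ('*' factor)* ; factor := NUM | '(' expr ')'
def parse(tokens):
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("+", node, rhs)
        return node, i
    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = factor(i + 1)
            node = ("*", node, rhs)
        return node, i
    def factor(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            if tokens[i] != ")":
                raise ValueError("syntax error: expected ')'")
            return node, i + 1
        return ("num", int(tokens[i])), i + 1
    node, i = expr(0)
    if i != len(tokens):
        raise ValueError("syntax error: trailing tokens")
    return node

# Semantic level: the mathematical object each valid statement denotes
# (here the domain is simply the integers).
def denote(node):
    tag = node[0]
    if tag == "num":
        return node[1]
    left, right = denote(node[1]), denote(node[2])
    return left + right if tag == "+" else left * right

print(denote(parse(tokenize("2 * (3 + 4)"))))  # 14
```

A lexical error, a syntax error and a denotation are here kept distinct in exactly the sense of the three levels above; nothing in the sketch says anything about what such a statement might mean in a domain other than mathematics, which is the point taken up next.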

Such statements may also be intended to have meaning in domains other than those of mathematics: in physics, sociology, anatomy, physiology, psychology, commerce, etc. In that case, the statements in the language comprise a model of some part of that domain and the intent is usually reflected in the names used to decorate statements: variables, procedures, types, classes etc. These names do not themselves guarantee that the statements provide a valid model of the domain. That is a matter of observation and experiment, which may be assisted by mathematics (in the entailment of consequences and the demonstration of inconsistency) but cannot be completed there.

This level of language description was first explored by the scholastics in the 14th century (cf the ‘Supposition Logic’ of Petrus Hispanus) but was neglected in the 17th and 18th centuries when the powerful methods of Newton, Leibniz, Laplace and Lagrange encouraged belief in a mechanical universe. It was revived in the 19th century by Charles Sanders Peirce who, independently of, and more extensively than, Boole, formalised logic but also recognised the need for the other level of meaning, which he called ‘pragmatics’.

Pragmatics is concerned with value as experienced by the subject of a statement, an embodied individual (although the term was interpreted more widely than that by James et al). Peirce recognised that pragmatic considerations lead the individual to make distinctions in her world — ‘differences that make a difference’, as he put it — that are reflected in her statements. A collection of such distinctions he called an ‘ontology’. Ontologies are essentially individual: there can be no ‘universal ontology’ (such as those proposed by Porphyry and Leibniz). Despite this, we succeed in communicating with each other as individuals because we all encounter the same objective reality, with which our separate ontologies must needs be consistent. Further, in many fields of human endeavour, such as medicine or engineering, a large community shares a common collection of distinctions, usually promulgated by an education system, which constitutes a locally universal ontology.

This is what archetypes are for. They record the locally universal ontology of a domain of discourse and provide that ontology with an abstract syntax and a denotational semantics.
A system of archetypes is therefore contingent on the state-of-the-art in its domain of discourse, and on the context of other domains’ locally universal ontologies, in which it is deployed.
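The idea of an archetype as a recorded, locally universal ontology can be sketched as a set of named distinctions plus the constraints a record must satisfy. This is a deliberately minimal illustration: the archetype name, field names and ranges below are all hypothetical, and the sketch does not use openEHR’s actual Archetype Definition Language.

```python
# A hypothetical archetype: field name -> (expected type, constraint predicate).
# The distinctions and ranges are invented for illustration only.
BLOOD_PRESSURE = {
    "systolic_mmHg":  (int, lambda v: 0 < v < 300),
    "diastolic_mmHg": (int, lambda v: 0 < v < 200),
    "position":       (str, lambda v: v in {"sitting", "standing", "lying"}),
}

def validate(record, archetype):
    """Return the list of violated constraints ('dirty data' in the VistA sense)."""
    problems = []
    for field, (ftype, ok) in archetype.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype) or not ok(record[field]):
            problems.append(f"constraint violated: {field}={record[field]!r}")
    return problems

clean = {"systolic_mmHg": 120, "diastolic_mmHg": 80, "position": "sitting"}
dirty = {"systolic_mmHg": 120, "position": "prone"}

print(validate(clean, BLOOD_PRESSURE))  # []
print(validate(dirty, BLOOD_PRESSURE))  # reports a missing field and a violated constraint
```

Note that the contingency described above lives outside the code: when medical theory changes, or when this archetype is composed with those of another domain, it is the constraint set itself that must be renegotiated.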

[It is worth noting that the version of VistA developed in Seattle in the ’90s attempted to achieve something similar, but in reverse. It was front-ended with AI-based systems, called ‘cubes’, that were supposed to ‘project’ the data in patient records into the ontologies of specialist domains, both medical and administrative. Serious problems arose in the implementation of the cubes due to what was called ‘dirty data’, that is, fields whose recorded values did not satisfy the constraints demanded by the projection semantics. In retrospect, this was clearly an ontological problem.]

Thus, the archetypes of openEHR will, of pragmatic necessity, change with medical theory and practice, and may have to be altered when composed, locally, with those of other domains in which the practice of medicine and the needs, acute and chronic, of the patient are involved, such as prosthetics, social services, hospital administration, psychotherapy, legislation, finance etc.

The composition of disparate, locally universal, ontologies was recognised by Peirce to be a deep and difficult theoretical problem, which he left as an open question. It is still unresolved. Since only the concerned individuals may negotiate a shared ontology, ontological composition cannot be completely automated. As yet, neither openEHR nor any other EHR system has anticipated this issue.

As we move into a technological era in which socially critical systems are built around large and complex, locally universal ontologies, such as openEHR, the Semantic Web, e-government and Network Centric Warfare, we will need increasingly powerful tools and methods to mediate pragmatic and ontological negotiations among embodied individuals. One such set of tools and methods, built around BRL’s PAN (Projective ANalysis), is currently being deployed within the context of its associated methods of asymmetric design.

Our goal is to be able to meet the challenge of managing the dynamic adaptability of large complex systems-of-systems to evolving and disparate contexts-of-use.

Business as a Platform

by Richard Veryard

A business can be regarded as a platform of services. This has important implications for the (variable) geometry of the single firm, as well as the interoperability of multiple firms.

Amazon is a platform. eBay is a platform. (See report on eBay by Dare Obasanjo). Their business model involves providing services that other companies can build upon. Following this thinking, we end up with a stratified business stack, with businesses building upon other businesses. This is the world of the mashup – but it is also the world of serious enterprise interoperability.

Many businesses are trying to turn themselves into platforms. In his post on Disney, Pixar and Jobs, John Hagel argues the point for media companies. (I mentioned this briefly in my post on Disney, Pixar, Apple and Jobs.)

In a world of scarce attention, creators of media products will need to compete with those who re-conceive media products as platforms. What is the difference? Products are designed to be used on a standalone basis – you buy it and you view it or listen to it in the specific way the content creator intended. Platforms are designed to be built upon – they create opportunities for the original creator, third parties or the customers themselves to extend, enhance and tailor the content in ways that the original creator never anticipated. Offered as a platform, content can create far more value than any equivalent standalone product.

Many companies already have a platform, but they are trying to raise it. For example, the traditional role for telecoms companies is as a platform of telecoms connectivity. But it has been obvious for ages that there is no long-term profitability for telecoms from providing services at this level. So telecoms companies have long understood the need to raise the platform, to offer higher-value services. But they are still struggling to formulate and implement this strategic change. Why is it so difficult?

One reason for the difficulty comes from the asymmetry of demand, which generates complexity in the business stack. The height and configuration of each platform is a difficult strategic question: too low and you leave a value deficit, too high and you lose the economies of scale or scope, too inflexible and you can’t respond to change.

And how is the whole stack going to be organized, for whose benefit? This is a key question for asymmetric design.

The Double Challenge

by Philip Boxer
Larry Hirschhorn recently referred me to a book on the impact that network forms of organisation are having on the nature of work: “Fragmenting Work: Blurring Organizational Boundaries and Disordering Hierarchies”. His comment was as follows:

“It’s a serious book. The authors argue that the return of the network form of organization, while it has some economic logic sometimes, is often just a political solution that has become available, and that it is not necessarily more rational, but may benefit some interests, often at the expense of workers who are poorly treated, promoting an atmosphere of rootlessness that is no good for the more steady social system that is necessary for substantial innovation, rather than the rapid marginal effects of edge initiatives. So there’s lots of stuff about how edge work is different, and hard on the worker.”

He went on to ask whether this is a necessary consequence of ‘edge’ forms of working. I don’t think it has to be. Rather it needs to be seen as a consequence of misalignment between forms of organisation and the response to demand: a failure to meet a double challenge.

This double challenge can be understood in terms of the following double diamond, in which each side presents a challenge, to which is added the need to match the relationship being demanded on the right with a corresponding (mirror-image) basis of authority on the left. Thus increasing demands from the customer on the right for customisation and timely coupling with their individual context-of-use (an ‘edge’ relationship) are not matched by a correspondingly appropriate span of responsibility and accountability to the customer’s situation on the left:

So the double challenge involves not only responding to the customer’s demand, but also creating the organisational context that will sustain that response. It is this second part of the challenge that is not being taken up. Thus:

  • Historically, the assumption was that interoperability was endogenous to the enterprise silo, and so could be resolved hierarchically through processes of deconfliction (i.e. through accountability to hierarchy instead of to situation).
  • With the flattening of (vertical) hierarchies and growth of horizontal linkages between them, enterprise silos are being faced increasingly with interoperability that is exogenous to the enterprise silo (i.e. the span of complexity required exceeding the span of control).

Under stable conditions of demand, this flattening just amounts to using technology to take costs out of existing forms of organisation. But demand is not stable, and the big (‘21st century’) challenge is managing the risks arising from addressing new forms of demand within this new environment. The book is right in saying that this flattening can be very destructive. If it is just about taking out costs and not really addressing the change/development agendas, then it is very punishing on the people working within them because they are continually being expected to do more than their role is set up to do. The effect is that people are increasingly expected to work across a span of complexity that stretches beyond the hierarchies to which they are being held accountable, producing burnout, dependency on informal networks and long-term exhaustion.

So what is the solution?
The traditional way of managing interoperability is through establishing forms of vertical transparency consistent with the way in which the constituent activities have been deconflicted. The new forms of edge role require new forms of horizontal transparency that are consistent with the horizontal forms of linkage needed across enterprise silos to support them. Horizontal transparency enables different forms of accountability to be used that take power to the edge, but which in turn require asymmetric forms of governance (see the paper on “Taking Governance to the Edge”). Asymmetric design is our name for a process that supports asymmetric forms of governance, establishing the horizontal forms of transparency needed to sustain new forms of response to demand at the edge.

Interoperability Landscapes

by Philip Boxer
The word ecosystem is beginning to be used for a clustering of competing services around shared resources (see for example “The Move to Web Service Ecosystems”, BPTrends November 2005). Good examples of these competing services are provided by programmableweb, which tracks the ways in which mashups are being built from supporting APIs. John Musser very kindly provided us with some of their data, and we have produced the following interoperability landscape from it.
The underlying data is a matrix of mashups against APIs, so a high ‘q’ shows an API as being used by a large number of mashups (e.g. Google Maps). For any given level of ‘q’, we can then identify the number ‘k’ of other APIs used by that number ‘q’ of shared mashups (e.g. Amazon and Flickr at q=5). The result is a landscape in which the clusters of peaks and foothills indicate ecosystems of mashups built around common APIs. In the landscape we have picked out three ecosystems which stand out particularly – it is enlightening to see how Microsoft’s exclusivity is reflected in the isolation of its ecosystem, although what this doesn’t show, of course, is Microsoft’s dominance within corporate silos:
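The ‘q’ and ‘k’ measures described above can be sketched on toy data. The mashup and API names in the matrix below are invented for illustration; the real analysis used programmableweb’s mashup-by-API data.

```python
# Toy mashup-by-API matrix: mashup -> the set of APIs it builds on.
# All names here are illustrative, not taken from the programmableweb data.
TOY_MATRIX = {
    "m1": {"GoogleMaps", "Flickr"},
    "m2": {"GoogleMaps", "Flickr", "Amazon"},
    "m3": {"GoogleMaps", "Amazon"},
    "m4": {"GoogleMaps"},
    "m5": {"MSMapPoint"},
}

def q(api, matrix):
    """q: how many mashups use this API."""
    return sum(api in apis for apis in matrix.values())

def k(api, matrix):
    """k: how many *other* APIs are used by the mashups that share this API."""
    others = set()
    for apis in matrix.values():
        if api in apis:
            others |= apis - {api}
    return len(others)

for api in ["GoogleMaps", "Flickr", "Amazon", "MSMapPoint"]:
    print(api, "q =", q(api, TOY_MATRIX), "k =", k(api, TOY_MATRIX))
```

Even on this toy data the landscape shape is visible: GoogleMaps is a peak (high q, shared with other APIs), while MSMapPoint has k = 0, the isolation-of-an-ecosystem effect noted above.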

This visualization of an interoperability landscape is a powerful way of showing the value that comes when things are combined with other things, in this case describing a layer mediating between the demands of users within their contexts-of-use and the supply of services from APIs. We are interested in using the underlying form of analysis to understand how particular new forms of demand span gaps in the existing landscape. These gaps may identify opportunities to support new forms of demand, so the next stage would be to look more closely at the existing forms of demand being satisfied by the mashups. To do this, we would need to characterise the different kinds of demand situation these mashups are responding to, described in terms of different kinds of context-of-use rather than another level of (aggregated) functionality.
We are dealing with a kind of cycle here (as in the paper on asymmetric governance) in which mashups are most likely to be where new forms of interoperability can get established, corresponding to the emergence of “pull” models of business.

Asymmetric Design

by Richard Veryard
Philip’s post on Asymmetric Demand described a situation in which the forms of demand are increasingly specific to the context in which they arise.

In a situation where Asymmetric Demand prevails, the business design response may be either Symmetric or Asymmetric. Symmetric Design acts as if the Demand were Symmetric (or near-enough Symmetric). It will often produce results that appear acceptable within a static view of demand (as demand becomes more dynamic, this view has to become narrower and more short-term), but it will show its inadequacy once demand is assumed to be dynamic (requiring real-time business models).

In contrast to this, Asymmetric Design presumes that the aim of the business design process is not to render the demand symmetric, but to manage the asymmetry through a continuous and continuing process.

To maintain an Asymmetric focus, we must address three things:

1. Establishing a collaborative relationship between supply-side and demand-side – this typically takes the form of a joint venture between user and provider, with appropriate mechanisms for managing and sharing the risks and rewards.

2. Joint appreciation of what is driving the asymmetry – this requires the production of a demand-side model that is independent of the supply-side model, so that they can be juxtaposed and assembled.

3. Collaborative composition – the user must be able to compose the service that best approximates to his or her need close to the time of use, and then orchestrate the available supply-side services to support that composition. Depending on how dynamic the demand is, this may be a one-off process, or it may have to be supported by a demand-side composition platform (operating in the manner of a ‘wizard’).
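The composition step in point 3 can be sketched as follows. This is a minimal illustration under stated assumptions: the service names, the capabilities, and the greedy covering strategy are all hypothetical, standing in for whatever selection logic a real demand-side composition platform would use.

```python
# Supply-side services and the capabilities each provides (names invented).
SERVICES = {
    "booking":   {"schedule"},
    "payments":  {"charge", "refund"},
    "logistics": {"deliver", "track"},
}

def compose(need, services):
    """Greedily pick services until the demanded capabilities are covered."""
    plan, remaining = [], set(need)
    while remaining:
        # Choose the service covering the most still-unmet capabilities.
        name, caps = max(services.items(), key=lambda s: len(s[1] & remaining))
        if not caps & remaining:
            raise ValueError(f"unmet demand: {sorted(remaining)}")
        plan.append(name)
        remaining -= caps
    return plan

def orchestrate(plan):
    # Stand-in for invoking each composed service in turn.
    return [f"invoke {name}" for name in plan]

# The user states a need close to the time of use; the platform composes it.
plan = compose({"schedule", "charge", "deliver"}, SERVICES)
print(plan)           # covers all three capabilities
print(orchestrate(plan))
```

The asymmetric point is carried by the separation: the need is expressed entirely in demand-side terms, and only the compose step maps it onto whatever the supply side happens to offer, failing loudly where the demand cannot yet be met.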