Agenda

Workshop on Network Complexity

26/27 April 2010, London, UK

Goal of the workshop
Try to objectively answer these questions:

  • what constitutes network complexity?
  • how can we measure it?
  • is there a need to control / contain complexity?
    • if so, how?

Format

  • Mainly discussion; few if any slides.
  • Everybody contributes
  • Objectivity is key

Draft Agenda

(This is only a starting point – we will adapt as needed)

Monday 26th April 2010

0900 Welcome
0915 Round table introductions

Question for everyone: What is your involvement in complexity?

1000 Keynote (tbd)
1100 Soap box time:

Short (10 min) viewpoint statements

1200 Lunch
1300 Work session: “What constitutes network complexity?”
1800 Adjourn
1900 Dinner

Tuesday 27th April 2010

0900 Work session: “How can we measure complexity?”
1200 Lunch
1300 Work session: “Is there a need to control / contain complexity?”
1700 Wrap-up, next steps
1800 Adjourn

Soap Box Statements

Please add here, in a short paragraph, information or research you want to contribute, views on what we should (or should not) work on, what the goal should be, and so on.

Soap Box 1: Network complexity depends on state, and rate of change

State and rate of change are key factors in determining overall complexity. The state comprises network state (routers, links, OS, config, state tables, etc.), NMS state (third-party apps supporting the network, such as NMS, AAA, ...), and the "human state" (the number of operators and their combined knowledge and experience).
You can shift state between those components (e.g., do routing via a protocol, or centrally control it from the NMS/OSS). Shifting complexity between components may affect overall system complexity. Rate of change is another important factor - more research is needed here.
Bottom line: we need to better understand what the complexity of the separate entities of a network is and how to measure it; then we can start organising the complexity in a way that decreases overall complexity.
M.Behringer 09:42, 25 March 2010 (UTC)
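
A crude, purely illustrative sketch of such a measure: score each component by its state size, inflated by how quickly that state changes. The components, weights and numbers below are invented for illustration only.

 # Purely illustrative: score each component's state size, inflated by its rate of change.
 from dataclasses import dataclass
 
 @dataclass
 class Component:
     name: str
     state_items: int        # e.g. routes, config lines, runbooks (invented numbers)
     changes_per_day: float  # how quickly that state churns
 
 def complexity(components, churn_weight=0.5):
     """Toy measure: sum of state sizes, each weighted by its rate of change."""
     return sum(c.state_items * (1 + churn_weight * c.changes_per_day) for c in components)
 
 network = [
     Component("network state (routers, links, config, tables)", 5000, 20.0),
     Component("NMS/OSS state (AAA, monitoring, ...)", 800, 5.0),
     Component("human state (operators, runbooks)", 50, 0.2),
 ]
 
 # Shifting state between components (e.g. routing via a protocol vs. central
 # control from the NMS/OSS) changes the individual terms, not necessarily the total.
 print(f"toy complexity score: {complexity(network):.0f}")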

Soap Box 2: Complexity: Good, Bad and Indifferent

Mostly the Internet is just plain complicated, but it isn't strictly a complex system (colour me black). From time to time, complex-system behaviour does appear to be present (in the graph and in traffic), but in a benign, Good way, in that it permits the system to run HOT (colour me green). Rarely is it Bad (colour me red); the race between a worm and routing updates is one of the few practical working examples of Bad. Design rules for Good vs. Bad are probably premature - but we can discuss these and maybe they will emerge :)

Jon Crowcroft, 25.3.2010

Oh, Fred Baker posted a link to an interesting talk by Clay Shirky (viz. Freakonomics etc.) at

[http://www.shirky.com/weblog/2010/04/the-collapse-of-complex-business-models/], which discusses the collapse of complex business models - it's mainly about Web 2.0 and the fall of the Mayan Empire (you know these wacky academics who manage to generalise two things as disparate as this into one theory :-)

However, after discussion with colleagues here, it is really based on no facts - for example, see this PNAS paper, which is actually based on real data: [1]. The Web 2.0 part is also subjected to a fine roasting at this blog: [2]

Meanwhile, growth and prosperity show up here too: [3]

All of which matters, because if we decide we want to simplify the Internet a lot, we have to fight a LOT of the same things people have failed to fight in these stories...

Meanwhile, nothing to do with complexity or the internet, but connected with Shirky's freakonomic type studies, this is quite amusing: [4]

Soap Box 3: What is the cost of complexity?

I am very interested in two aspects: 1) how does complexity affect cost? and 2) how does it affect the quality of user experience (QoE)? For example, does it cost more to deploy a QoS model or to throw bandwidth at the problem? If my network is very complex, does it have a detrimental effect on my real-time services? Clearly there has to be a degree of complexity to deliver real-time services, but how much does it cost, and is one level of complexity more expensive than another? Keith Jones, QMUL, 31/3/10

Soap Box 4: What creates complexity?

My early background in computing science, many years back, was in programming languages. Some languages were written by an individual, others by committee, and the difference was clearly evident. When a programming language was devised by an individual there was a consistent "feel" to all of the constructs within the language, and once you had identified this theme or pattern, the task of expressing an algorithm within the constructs of the language was amazingly simple and direct. Other languages were the outcomes of committees, and these never had the same consistency or the same ease of use. The constructs were inconsistent and the language was, to me, "complex," simply because it lacked this thematic consistency.

I suspect that network complexity lies along a similar vector. "Simple" networks are behaviourally and architecturally consistent networks. There is an identifiable pattern of behaviour that allows one to understand how a network acts. Complex networks do not behave consistently, and the lack of an obvious theme or pattern in their behavioural responses is possibly what causes them to be termed "complex."

So what creates complexity? We generally don't deliberately intend to create chaotic systems, nor systems that behave in ways that become unpredictable and "complex." As individuals we tend to impose order on our environment through the use of simple principles applied as consistently as we can. However, I suspect that groups of people have an entirely different dynamic, and the behaviour of groups leads to outcomes that defy logic. The classic example for me is the committee decision regarding the ATM cell size, but all group-engineered systems exhibit the same creeping complexity in their inevitable incremental featurism.

If we want systems that behave in ways that Occam would endorse then perhaps we need to stress the importance of a simple and coherent set of architectural principles and strongly resist efforts to augment and ornament and otherwise "add value" to the system.

Geoff (taking a more sociological stance here!)

Soap Box 5: Two Observations

Both of these contribute to (perceived) complexity, and both also offer opportunities for simplification (and yes, of course I'm biased towards embedded automation, management and operations):

  • Pulling all strings manually vs. operating from the glass cockpit of an automated (autonomic) infrastructure: part of the success of IP networks stems from their rapid adoption and incorporation of features - historically to migrate legacy networks towards IP, more recently to enable entirely new domains and applications. The resulting large number of artifacts that live within an IP network is still today largely managed and operated by a single, central, external authority of human engineers and operators who rely on various tools and management applications.

Can we 'simplify' the network by making (some of) its components more autonomic and self-managing?

  • Design for average abstract users vs. typical real-life humans: today's networks expose a large set of parameters, mostly via low-level general-purpose interfaces such as CLI, SNMP, NETCONF, etc. While this may fulfil the formal requirements of abstract network engineers and operators, it ignores the fact that real-life humans working in the IT industry have a very diverse set of skills, training, motivations, roles, responsibilities, etc.

Can we 'simplify' the network by offering tailored interaction for (some of) the typical personas interfacing with a network?

See also the 2 slides I uploaded: Media:20100426-Cisco-EASy-SoapBox-bklauser-public.pdf

Bruno Klauser

Soap Box 6: Why is a network complex?

How does complexity manifest itself? Typically when something breaks, or when it is hard to know how to make a change that moves the network from one state to another. Anything else?

In the failure scenario, complexity can be seen when the "distance" between a cause and its symptom is large. Typically the symptom is an unexpected consequence of the cause. In other words, the network could not be validated fully. The distance can be increased by the number of layers/components between the cause and the symptom. Unfortunately networks are particularly susceptible to this problem, as they typically include multiple layers and many components. Software developers can have a goal of writing testable code, such as avoiding side effects and limiting the functionality of classes. Can networks be built with the same goals, either in terms of design or in how network operating system features are configured?

The understanding of the intent of a network design often decays after it is built, and increasingly so once changes are made to it. Are changes validated against the original design intent? Where is the intent stored? A configuration is not a good place to start understanding what the function of the network should be under different conditions. Was it meant to fail like that? Did the original designer explicitly consider this failure state? It is not often that troubleshooting an unexpected failure starts with reading the 50-page design guide. Engineers and operators often start from first principles, with some of that design guide in their heads, and in attempting to understand the cause of the failure begin to reverse-engineer the configuration to understand what should have happened. Could configuration files be created in such a way as to be more direct in explaining the intent? For example, an IP address is assigned to an interface, and that address is taken from a larger range intended for interfaces with that function. Where is that functional information stored? Could/should it be stored in the configuration?
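
As a purely hypothetical sketch of what "storing intent in the configuration" could look like, the fragment below attaches each interface address to the functional pool it was allocated from, so the design intent can be read back from the configuration itself. All device names, prefixes and pool names are invented.

 # Hypothetical sketch: keep the "why" next to the "what" in configuration data.
 intent_pools = {
     "p2p-core-links": {"prefix": "10.0.0.0/16", "purpose": "router-to-router /31 links"},
     "loopbacks": {"prefix": "10.255.0.0/24", "purpose": "router IDs / iBGP peering"},
 }
 
 interfaces = [
     {"device": "core1", "interface": "Ethernet1", "address": "10.0.0.0/31", "intent": "p2p-core-links"},
     {"device": "core1", "interface": "Loopback0", "address": "10.255.0.1/32", "intent": "loopbacks"},
 ]
 
 # A troubleshooter (or a validation tool) can then ask: what was this address for?
 for entry in interfaces:
     pool = intent_pools[entry["intent"]]
     print(f'{entry["device"]} {entry["interface"]} {entry["address"]} '
           f'<- {entry["intent"]}: {pool["purpose"]}')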

Verifying network function is particularly hard. Much of the functionality in a network is related to dealing with component failure. How can the functions of the network, including those for dealing with failure (e.g. routing protocols), be tested under all the failure conditions? The typical approach is to build standard modules that can be verified in a lab and to build larger networks from those modules. Standardisation typically increases capex with the goal of saving opex. What is the optimum number of design variations to minimise the combined cost?
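
A toy cost model for that last question might look like the sketch below: fewer design variations mean more over-provisioning (capex), while more variations mean more validation, documentation and operator knowledge (opex). All coefficients are invented; the point is only that the combined curve has a minimum somewhere in between.

 # Invented coefficients; only the shape of the trade-off matters.
 def capex(n_variations, base=100.0):
     # over-provisioning penalty shrinks as designs fit their use cases better
     return base / n_variations
 
 def opex(n_variations, per_variation=12.0):
     # every variation must be validated, documented and understood
     return per_variation * n_variations
 
 costs = {n: capex(n) + opex(n) for n in range(1, 11)}
 best = min(costs, key=costs.get)
 print(f"cheapest (toy) number of design variations: {best}")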

Anecdote: I recently had a conversation with a colleague responsible for desktop computing. He complained that us network folks were always trying to stop things happening rather than enabling them. Somewhat surprised by the statement, I replied by saying that in fact what we were doing was trying to constrain the function of the network so we had the best chance of understanding how it would behave. On reflection, this could be a symptom of complexity in networks. How could we get to the point where plug-and-play (Bruno: autonomic?) doesn't in fact mean increased complexity? Typically this approach works with small to medium numbers of components (because the autonomic behaviour can be evaluated) but doesn't scale well. As a result, do we need to be more explicit about a network's function and constraints as it becomes larger?

Steve Youell


Soap Box 7: Can complexity be mitigated?

Is complexity a matter of reaching an inflection point? If you start with a simple network, then add more simple networks, and keep adding more simple networks, do you suddenly reach a point where it becomes complex? Does an increase in scale automatically result in complexity? Is it the point at which the number of variables or interactions within a system, once 'predictable', becomes unpredictable - and because it is unpredictable, the network becomes 'complex'? What degree of complexity is acceptable, and does that change over time?
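
A back-of-the-envelope illustration of why simply adding simple networks can tip into unpredictability: components grow linearly, but the potential pairwise interactions between them grow roughly quadratically (n(n-1)/2).

 # Components grow linearly; potential pairwise interactions grow as n*(n-1)/2.
 for n in (5, 10, 50, 100, 500):
     interactions = n * (n - 1) // 2
     print(f"{n:>4} components -> up to {interactions:>7} pairwise interactions")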

One might suggest that network complexity can be looked at in a 'pure' sense, as in the complexity of the design and implementation of a network, distinct from 'operational complexity' - namely, how does the network 'behave' or 'operate' in response to various events. Can operational complexity be offset or mitigated by the use of effective tools and procedures? Do we feel that such tools are well developed and commonplace in operational networks today, or is the network's complexity too readily exposed?

Such a view can be applied not only to 'networks' but also to the devices that comprise the network. One might suggest that network devices are becoming more complex to operate rather than simpler (incurring an increased opex cost as a result). The number of functions that the device is being asked to perform increases, the number of interactions taking place within the device's operating system increases, and the interactions between the operating system and the hardware components increase - but are the tools to address and hide that complexity keeping pace? One would suggest that the answer is 'no'. If one were able to provide appropriate tooling, could the level of complexity be held in check? Could it be held in check as scale increases? Could the complexity even be reduced?

Joel Obstfeld
