IP network design, part 1: Fundamental principles

A competent network design is the foundation upon which all successful network implementations are built. This is the first of four articles that focus on the design of IP-based networks, reflecting the prevalence of IP as the de facto standard desktop protocol.


The applications that a state-of-the-art IP network supports have become increasingly diverse in nature. Along with traditional data applications, IP has become a transport mechanism for real-time applications such as voice, video and multimedia. As a result of the heterogeneous nature of modern applications, the design of IP internetworks has never been more challenging. This article discusses the fundamental principles that should be followed when designing a network. Subsequent articles will deal with the specific LAN and WAN technologies with which a proficient network design can be implemented.

The need for a design plan

Most IP internetworks fall into one of two categories in relation to their design: those that have clearly been well designed, and those that have merely been pieced together over time. The perceptible difference between these two types of network illustrates the importance of good design. A network that has been well designed is characterised by predictability and consistency in each of the following areas:

A well-designed network delivers a consistently high level of performance across the major network performance parameters. These parameters might include application response time and the variation in response time (jitter).

The network should provide a resilient platform for the applications that it supports. A highly specified network might have to meet an availability target of 99% for all applications, with a 'zero-downtime' requirement for mission-critical applications. Ideally, the failure of any one link or networking device along the client-to-server path should not result in the loss of a client-server session. Automatic failover to an alternate path should occur within a time interval short enough to minimise the effect on existing sessions. This interval is called the convergence time, which can be defined as the duration from a network topology change (such as the loss of a link) until every device on the network is aware of the change. Well-designed networks are characterised by consistently low convergence times.
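As a rough illustration of how convergence time can be reasoned about, worst-case convergence for a link-state routing protocol is approximately the failure-detection time plus the time to recompute and install new routes. The figures below are assumptions for illustration only (the detection value shown happens to match OSPF's default dead interval); real values depend entirely on the protocol, vendor and tuning in use:

```python
# Rough worst-case convergence estimate for a link-state routing protocol.
# All figures are illustrative assumptions, not vendor guarantees.

DEAD_INTERVAL = 40.0   # seconds without hellos before a neighbour is declared down
SPF_DELAY = 5.0        # assumed hold-down before the SPF recalculation starts
SPF_RUN_TIME = 1.0     # assumed time to recompute and install new routes

def worst_case_convergence(detection, spf_delay, spf_run):
    """Failure detection + recalculation gives an upper bound on convergence."""
    return detection + spf_delay + spf_run

print(worst_case_convergence(DEAD_INTERVAL, SPF_DELAY, SPF_RUN_TIME))  # 46.0 seconds
```

The sketch makes the design lever obvious: tuning hello/dead timers down shortens detection, and therefore convergence, at the cost of extra protocol overhead.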

A scalable network is capable of adequately supporting growth without having to be radically redesigned. Growth in the number of users, network nodes or sites must be catered for, along with the possible addition of new applications and the increased bandwidth consumption they will entail. To get a feel for how scalable your network is, ask yourself the following questions: What if there were twice as many users? Twice as many nodes? New applications that demanded twice the bandwidth?

IP network design series

Part 1: Fundamental principles

Part 2: The IP addressing plan

Part 3: Designing the wide area network

Part 4: LAN design

A scalable network can accommodate this growth and change without requiring a significant overhaul of its infrastructure. The fundamental network topology and the technology employed should not have to be redesigned in order to accommodate growth. New nodes and users can be added to a scalable network in a simple building-block approach: adding a new node, for example, should simply entail attaching a new section or block to the existing core or backbone of the network. Increased bandwidth demands should be met by augmenting LAN and WAN bandwidth as necessary.

Certain operational upgrades may also be required during the network lifetime such as increased memory and processing power on the network routers and switches. However, what should not be required is a radical overhaul of the network infrastructure in order to support projected growth during the network's lifetime. This is, after all, one of the fundamental reasons why a network plan is put in place to begin with.
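The "what if everything doubled?" questions above can be turned into a simple headroom check during capacity planning. The link names, capacities and loads below are purely hypothetical examples:

```python
# Flag links that could not absorb a doubling of traffic.
# All link names, capacities and loads are hypothetical examples.

links = {
    # name: (capacity in Mbit/s, current peak load in Mbit/s)
    "HQ-to-DC":     (100.0, 35.0),
    "Branch-1-WAN": (2.0,   1.4),
    "Branch-2-WAN": (2.0,   0.6),
}

GROWTH_FACTOR = 2.0      # "twice the users, twice the bandwidth"
MAX_UTILISATION = 0.8    # keep peak load under 80% of capacity

def needs_upgrade(capacity, load, growth=GROWTH_FACTOR, ceiling=MAX_UTILISATION):
    """True if projected load would exceed the utilisation ceiling."""
    return load * growth > capacity * ceiling

for name, (capacity, load) in links.items():
    if needs_upgrade(capacity, load):
        print(f"{name}: upgrade needed before growth")
```

Running a check like this against projected growth figures identifies, before the event, which parts of the infrastructure would need augmenting rather than redesigning.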

Running costs
There is no getting away from the fact that cost is the most fundamental driver behind the network design process. Networks must not only meet a certain technical specification, they must also be cost-effective in their design and implementation. The main component of the cost of owning a network is usually the WAN charges from the service provider: that is, the cost of frame relay, ATM, leased-line or ISDN services.

Network designs are typically characterised by a trade-off of cost versus performance and availability. For example, more bandwidth may be required to ensure optimum application performance; however, there is usually a cut-off point beyond which purchasing more bandwidth is no longer cost-effective. Similarly, backup circuits or ISDN may be required to ensure resilience along the client-to-server data path in the event of a failure on the primary path. This backup technology must be of similar speed to the primary link in order to avoid degraded service during a fault condition. Whether degraded service can be tolerated during a fault scenario is an economic decision for the customer.

A well-designed network should not only be cost-effective to operate; it should also be characterised by relatively consistent running costs. One of the best illustrations of the importance of consistent and predictable running costs relates to support. The second largest component of the cost of owning a network (after WAN costs) is the cost of support. It is also the most overlooked cost element, mainly because it is notoriously difficult to quantify. For example, an organisation may decide on a privately managed implementation of ATM in order to reduce the WAN charges that would be incurred from an ATM service provider. While this would undoubtedly reduce WAN costs, it would also increase support costs, because a significant level of expertise is required to support a private ATM network. Hiring and retaining such expertise is expensive. Without it in-house, however, the cost of network support is likely to be even greater, with external consultants and other third parties needed to 'fill the gaps' and ensure smooth daily operation.

Design Objectives

It is imperative to set clear design objectives at the outset of the design process. These objectives relate to the parameters by which a network design is evaluated. Key performance parameters must be identified and have target values assigned to them. These performance targets are ultimately dictated by the application requirements.

To assign these targets in a meaningful manner, the application must be understood at both a quantitative and a qualitative level. The bandwidth consumption associated with the application must be evaluated in order to provide the capacity necessary to meet performance targets. The sensitivity of the application to packet loss, packet delay and variation in delay must be clearly understood. This is particularly important on modern IP networks where multiple heterogeneous applications are to be supported. Data applications that employ UDP transport are more seriously affected by packet loss than reliable, connection-oriented TCP-based applications. Conversely, real-time applications such as voice, video and multimedia are more tolerant of packet loss than they are of delay and variation in delay. Thus different network applications may require that different quality parameters be prioritised.
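To make the bandwidth-evaluation step concrete, here is a sketch of the per-call IP bandwidth for a voice stream. It assumes a G.711 codec (64 kbit/s) with 20 ms packetisation, and counts only the IPv4, UDP and RTP headers; layer-2 overhead would add more on top:

```python
# Per-call IP bandwidth for a voice stream, headers included.
# Assumes G.711 at 64 kbit/s with 20 ms packetisation; layer-2
# overhead (Ethernet, frame relay, etc.) is deliberately excluded.

CODEC_RATE_BPS = 64_000                  # G.711 payload rate in bit/s
PACKETISATION_S = 0.020                  # 20 ms of audio per packet
IP_UDP_RTP_HEADER_BYTES = 20 + 8 + 12    # IPv4 + UDP + RTP headers

def per_call_bandwidth_bps(codec_bps=CODEC_RATE_BPS,
                           interval_s=PACKETISATION_S,
                           header_bytes=IP_UDP_RTP_HEADER_BYTES):
    payload_bytes = codec_bps * interval_s / 8   # 160 bytes per packet
    packets_per_second = 1 / interval_s          # 50 packets per second
    return (payload_bytes + header_bytes) * 8 * packets_per_second

print(per_call_bandwidth_bps())  # 80000.0 bit/s per direction
```

The point of the exercise is that headers inflate a 64 kbit/s voice payload to roughly 80 kbit/s of IP traffic per direction; multiplying by the expected number of concurrent calls gives the capacity figure the design must provide.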

Target values should be set for network availability or downtime. This target, like the performance targets, serves as a quality benchmark during the design process. The tolerable level of network downtime depends heavily on the nature of the business itself. The effect of application unavailability can vary from the loss of tens of thousands of dollars per hour in the financial sector to the potential loss of human life in the medical sector.

An estimate should also be provided for the scale to which the network is likely to grow. This should include projected growth in the number of users, network nodes and geographically dispersed sites and, arguably most importantly, growth in application traffic. It then becomes the designer's task to put a network plan in place that will accommodate this growth.

Designing a network to the performance, resilience and scalability specifications is of little use if it is not a cost-effective solution. The designers must be keenly aware of the budgetary constraints in order to make intelligent cost-versus-availability trade-off decisions.
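A quick way to ground an availability target is to translate it into permitted downtime per year. The 99% figure mentioned earlier, for instance, allows more than 87 hours of downtime annually, which makes clear how far it is from 'zero downtime':

```python
# Translate an availability target into allowed downtime per year.

HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(availability):
    """availability is a fraction, e.g. 0.99 for 99%."""
    return (1.0 - availability) * HOURS_PER_YEAR

print(round(downtime_hours_per_year(0.99), 1))            # 87.6 hours per year
print(round(downtime_hours_per_year(0.99999) * 60, 1))    # about 5.3 minutes per year
```

Expressing the target this way gives both designer and customer the same concrete benchmark to negotiate against.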

Achieving the design objectives

Network design requires extensive practical experience combined with a theoretical understanding of the technologies and how they relate to one another. Hands-on experience is particularly critical, and this is often overlooked. An engineer without extensive network support experience is, in my view, not yet equipped to work in design. The tools that enable you to achieve the design goals are encompassed in the technology itself, so you need a good knowledge and understanding of it: scalable routing protocols, cost-effective WAN transport technology, network management and so on. I would always recommend laboratory work when performing proof-of-concept tests. Design must be carried out in a lab rather than as a theoretical paper exercise; the multitude and interaction of so much technology is simply too complex to verify in anything other than a real-life test bed. The following steps provide an approximate guideline for the fundamental tasks to be followed during the design process:

(i) Determine the performance parameters that best specify each of the design goals. For example application response time, percentage packet loss, latency, and application availability.
(ii) Identify any design constraints. The most obvious constraint is budget. Others may include the implementation timescale, support for legacy equipment, and the incorporation of specialised departments that require a unique network specification and policy.
(iii) After considering the constraints, set targets for the relevant network performance parameters.
(iv) Commence a high-level design. This is intended to resolve major issues such as the choice of WAN technology and equipment, the IP addressing plan, the degree to which routing is used instead of switching and so on.
(v) This high-level design should then be compared against the constraints. If the constraints are not met, an iterative step backwards is required. If they are met, the design process can proceed.
(vi) A specific network design plan can now be formulated. This addresses all technical details and alternatives for the design.
(vii) Each major aspect of the technical solution should be lab tested, including the application response and availability characteristics. This facilitates an iterative refinement of the technical solution.
(viii) The design is complete when the technical design is fully refined. In some cases the final lab tests may indicate that the fundamental performance targets or constraints are unrealistic and have to be revised. The aspiration, however, is to finalise these parameters, at least tentatively, at the high-level design stage.

Network design principles

I will now summarise some of the key principles that must be followed for successful network design. Many of the poorest network implementations that I have seen have ultimately arisen from the fact that these network design principles were not observed.

Applications drive the design requirements. The network is the structure that facilitates the applications. Without understanding an application's characteristics and requirements, the network cannot be designed.

Network design requires experienced personnel. The network design engineer requires broad practical experience combined with a theoretical understanding of the technologies and how they relate to one another. Extensive practical experience should be thought of as a necessary prerequisite to a design role. You cannot design a network without a reasonable understanding of how it operates.

Networks are designed in a lab rather than on paper. A lab is the single most important design tool. Given the complexity of the more advanced internetwork designs, a design is not valid until it has been verified in the lab. Network modelling software is also not to be trusted. Internetworking entails a multitude of complex technologies that must successfully interact with each other. The design of large or complex networks cannot be reliably modelled in my view. Such modelling is only appropriate for high-level design. When resolving specific technical detail, a lab is required.

Network design usually involves a number of trade-offs. Cost versus performance and availability is usually the fundamental design trade-off.

Don't be enslaved by the corporate structure. The network design and topology can often mirror the corporate structure of the organisation. While attempting to mirror this structure is not necessarily to be discouraged, the network designer should certainly never become enslaved by it. Such an approach can result in fundamentally flawed designs. Remember, the design objectives are the only essential driving force behind the design.

Vendor independence. Proprietary solutions are not to be encouraged but they should not be automatically avoided either. There are instances where dominant vendors can provide the best solution.

Keep it simple. Unnecessary additional complexity is likely to increase the support cost and may make the network more difficult to manage. Each needlessly complex solution may also introduce an additional piece of software, and with it additional bugs. The simplest viable solution should always be implemented; increased complexity is only justifiable if there is a related benefit or requirement.

Design every network on its own merits. Do not work to a set of rigid and possibly over-generalised design rules or templates. Consider every network on its own merits and avoid copying existing solutions simply because the networks appear similar.

Avoid the bleeding edge. Only use mature and well-tested software and hardware for all devices on the network.

The fundamental design plan must not be compromised. The design may have to show some degree of flexibility and evolve with the network; this relates to the requirement for a scalable design. However, it must not be compromised at a fundamental level. For example, if you are implementing a three-layered WAN hierarchy, do not compromise it by adding another layer or by 'mixing and matching' layers, as this invalidates the original design. If the original design is repeatedly compromised for the sake of 'quick fixes', then at some point it becomes eroded into oblivion and there is no longer a network design in place. A network design is merely an academic exercise if it is not fully and precisely implemented as per the original design plan. No changes should be made to the original design without the endorsement of the engineers who formulated it.

Predictability is the hallmark of a good design. Predictability and consistency in performance, resilience and scalability is a characteristic of a well-designed network.

Design it once or design it a thousand times! If a network was not designed properly at the outset or if that design was compromised then everyday tasks such as troubleshooting and adding new devices to the network become design projects in themselves. This is because without a valid design that has been followed, basic network changes do not form part of any plan. Thus they must be treated as isolated projects. There is no predictability and the effect of any changes on the network must always be independently assessed if the design plan has been deviated from. This is what I call designing a network "a thousand times."

Design requires a small, capable team. No one person, no matter how skilled or experienced, should be the single and absolute authority in designing the network. Designing a network involves balancing priorities, performing trade-offs and addressing a broad range of technical issues at both a general and a detailed level. People with different specialities and strengths are required in a design team. Some may focus on the general while others are sticklers for the specific details. However, I would even more strongly emphasise that networks should not be designed by committee. A small team of capable engineers, reporting at a general level to management, should resolve the details of the network design process.
