Switching architectures in a data center

In a data center environment, a number of servers, storage devices, and networking devices work together to provide the data center's functionality. To connect all of these devices, it is necessary to join them into a network in which data transmission between them is handled securely and efficiently. These interconnections form a communication network called the data center network (DCN). So when we talk about interconnecting these devices, what method do we use to achieve it?
The answer is as simple as it gets: switching is the methodology we rely on. In this section, we will look at the switching architectures used in a data center environment.

Ways to Innovate

Over the years there have been a number of innovations and improvements in this area. Due to tremendous growth in computational power, storage capacity, and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability, and scalability. Although the Transmission Control Protocol (TCP) is a time-tested transport protocol on the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted researchers to propose techniques to improve TCP performance or to design new transport protocols for the DCN. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this section, we will look at two switch-centric DCN architectures: the widely deployed legacy three-tier architecture and the promising fat-tree architecture.

Three-Tier Architecture

Three-tier switch architectures have been common practice in data center environments for several years. The three-tier architecture consists of three layers, namely core switches, aggregation/distribution switches, and access switches. These devices are interconnected by redundant pathways, which can create loops in the network. As part of the design, a protocol that prevents looped paths (Spanning Tree) is implemented. However, doing so deactivates all but the primary route; a backup path is brought up and utilized only when the active path experiences an outage.
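To make the loop-prevention behavior concrete, here is a minimal Python sketch, with hypothetical switch names, of how a spanning tree leaves redundant links idle. It is a conceptual illustration of the idea, not a real STP implementation:

```python
# Redundant three-tier wiring: each access switch uplinks to both
# aggregation switches, and each aggregation switch uplinks to both cores.
links = [
    ("access1", "agg1"), ("access1", "agg2"),
    ("access2", "agg1"), ("access2", "agg2"),
    ("agg1", "core1"), ("agg1", "core2"),
    ("agg2", "core1"), ("agg2", "core2"),
]

# Greedily build a loop-free tree (union-find), as STP conceptually does.
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

active, blocked = [], []
for a, b in links:
    ra, rb = find(a), find(b)
    if ra == rb:
        blocked.append((a, b))  # adding this link would form a loop
    else:
        parent[ra] = rb
        active.append((a, b))

print("forwarding:", active)   # 5 links carry traffic
print("blocked:   ", blocked)  # 3 redundant links sit idle until a failure
```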

In this architecture, with equipment now located anywhere in the data center, data traffic between two servers (east-west traffic) may have to travel up and down through multiple switch layers, increasing the hop count and resulting in increased latency and network complexity.

This architecture does not adequately support the high-bandwidth requirements of large virtualized data centers. This has led many data centers to move to switch fabric architectures that are limited to just one or two tiers of switches. With fewer tiers of switches, server-to-server communication is improved by eliminating the need for traffic to travel through multiple switch layers.
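As a rough illustration of the hop-count difference, consider the sketch below. The per-switch latency figure is an assumption chosen only to make the arithmetic visible, not a measured value:

```python
# Back-of-the-envelope comparison of worst-case switch hops.
PER_SWITCH_US = 2  # assumed forwarding latency per switch, in microseconds

# Worst case across a three-tier network:
# access -> aggregation -> core -> aggregation -> access
three_tier_switches = 5

# Worst case across a two-tier fabric: access (leaf) -> spine -> access (leaf)
two_tier_switches = 3

print("three-tier:", three_tier_switches * PER_SWITCH_US, "us")  # 10 us
print("two-tier:  ", two_tier_switches * PER_SWITCH_US, "us")    # 6 us
```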

Fat-Tree Architecture (Leaf-Spine Architecture)

Fat-tree switch fabrics, also referred to as leaf-spine or two-tier leaf-spine architectures, are among the most common switch fabrics deployed in today's data centers. The spine-and-leaf network design was originally implemented in data centers as a way to improve performance when handling the predominantly east-west traffic (traffic between devices in the data center). It does so largely by keeping the path between any two devices short and uniform: every leaf switch in the network has a direct connection to every spine switch, so any leaf can reach any other leaf in two hops. As configured for data centers, the leaf-spine architecture essentially collapses the core and aggregation layers into one layer, the spine, while the leaf layer is analogous to the access layer in the three-tier model. The leaf layer consists of access switches that connect to devices like servers, firewalls, load balancers, and edge routers. The spine layer (made up of switches that perform routing) is the backbone of the network, where every leaf switch is interconnected with each and every spine switch.


Working Principles of the Leaf-Spine Architecture

To follow the concept easily, I suggest you keep the picture above in view as you read each of the statements below.

In this two-tier Clos architecture, every lower-tier switch (leaf layer) is connected to each of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of access switches that connect to devices such as servers. The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches. Every leaf switch connects to every spine switch in the fabric. For each flow, the path is chosen at random so that the traffic load is evenly distributed among the top-tier switches. If one of the top-tier switches were to fail, performance throughout the data center would degrade only slightly.
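The following Python sketch models the full-mesh wiring and the random per-flow path choice described above. The fabric size (four leaves, two spines) and the switch names are hypothetical:

```python
import random

spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

# Full mesh: every leaf has one uplink to every spine.
fabric = {leaf: list(spines) for leaf in leaves}

def path(src_leaf, dst_leaf):
    """Return the switch path for a flow between servers on two leaves."""
    if src_leaf == dst_leaf:
        return [src_leaf]                    # same leaf: no spine needed
    spine = random.choice(fabric[src_leaf])  # stand-in for an ECMP-style hash
    return [src_leaf, spine, dst_leaf]

print(path("leaf1", "leaf3"))  # e.g. ['leaf1', 'spine2', 'leaf3']
# Losing one spine here halves uplink capacity but keeps every leaf reachable.
```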

If oversubscription of a link occurs (that is, if more traffic is generated than can be aggregated on the active link at one time), the process for expanding capacity is straightforward. An additional spine switch can be added, and uplinks can be extended to every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the oversubscription. If device port capacity becomes a concern, a new leaf switch can be added by connecting it to every spine switch and adding the network configuration to the switch. The ease of expansion optimizes the IT department’s process of scaling the network. If no oversubscription occurs between the lower-tier switches and their uplinks, then a nonblocking architecture can be achieved.
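A worked example may help here. The sketch below computes a leaf's oversubscription ratio under assumed port counts and speeds (48 x 10G server-facing ports, 40G uplinks); the numbers are illustrative, not a recommendation:

```python
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on one leaf."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 x 10G down, 4 x 40G up -> 480G : 160G = 3:1 oversubscribed
print(oversubscription(48, 10, 4, 40))   # 3.0

# Adding uplinks to two more spines (6 x 40G up) lowers the ratio to 2:1
print(oversubscription(48, 10, 6, 40))   # 2.0

# Matching downlink and uplink bandwidth gives a nonblocking fabric
print(oversubscription(48, 10, 12, 40))  # 1.0
```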

With a spine-and-leaf architecture, no matter which leaf switch a server is connected to, its traffic always crosses the same number of devices to reach another server (unless the other server is located on the same leaf). This keeps latency predictable and minimizes bottlenecks, because a payload only has to hop to a spine switch and then to another leaf switch to reach its destination. Spine switches have high port density and form the core of the architecture.

Special Notes

From a data center perspective, there are two types of network traffic: north-south and east-west.
Generally speaking, "east-west" traffic refers to traffic within a data center, i.e., server-to-server traffic. Put another way, it is traffic internal to the network that never leaves the data center, such as LAN client-to-server and server-to-server communications.

"North-south" traffic is client-to-server traffic between the data center and the rest of the network (anything outside the data center). Put another way, it is traffic coming into and going out of the network to Internet space, i.e., in and out of edge firewalls and/or routers.

Advantages of a Leaf-Spine architecture

Reduced latency: The hop count in a leaf-spine network is lower than in a three-tier architecture, so the latency for data transfer is correspondingly lower.

Improved redundancy: In a leaf-and-spine architecture, any single leaf switch (roughly equivalent to an access switch in the three-tier model) is connected to multiple spine switches; in a data center, each leaf switch may well connect to every spine switch. This provides a superior level of redundancy compared with the three-tier model, which is typically implemented using the Spanning Tree Protocol (STP) to prevent network loops. STP allows for dual redundant paths between any two points, with only one of them active at any given time.

Leaf-spine topologies provide numerous paths between any two points, typically implemented using protocols such as Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB). TRILL and SPB both allow traffic to flow across all available routes, offering improved redundancy, while still preventing loops, as STP does (see the sketch after this list).

Improved performance: The ability to use multiple network paths at the same time also improves performance. With STP, if the only available path becomes congested, performance suffers. Because TRILL and SPB can use multiple routes, congestion is less of an issue. What's more, having only a single spine hop between any two leaf switches makes for a more direct network path, which can also improve performance.

Improved scalability: Leaf-spine topologies are also inherently scalable. Providing many paths between any two network points, all of them available to carry traffic, reduces the possibility of congestion even in a large network. Adding switches to a leaf-spine network provides additional traffic routes, thus increasing scalability.

Supports less expensive, fixed-configuration switches: Fixed-configuration switches are less costly than the modular chassis switches that are often required with three-tier networks to provide the port density needed for the appropriate number of connections between switches at different layers. The leaf-spine architecture enables all ports on a spine switch to support connections to leaf switches, instead of to other spine switches, and it enables connections to be spread among a large number of spine switches. Chassis switches can still be used, but they are not required. That's one reason the leaf-spine design is a good fit for white box networking.

Adaptable to the enterprise: While it’s true that the leaf-spine architecture was originally designed for data center networks, to address the east-west nature of traffic between servers and storage systems, the architecture can also be extended outside the data center to the enterprise network at large – bringing many of the same benefits and more.  
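As a rough sketch of the multipath and scalability points above: in a full-mesh fabric, every spine switch adds one more equal-cost path, and one uplink's worth of bandwidth, between any pair of leaves. The 40G uplink speed below is an illustrative assumption:

```python
UPLINK_GBPS = 40  # assumed uplink speed

def leaf_to_leaf_capacity(num_spines):
    paths = num_spines                  # one leaf->spine->leaf path per spine
    return paths, paths * UPLINK_GBPS   # all paths usable with TRILL/SPB

for spines in (2, 4, 8):
    paths, gbps = leaf_to_leaf_capacity(spines)
    print(f"{spines} spines: {paths} paths, {gbps}G between any two leaves")
# Under STP, only one of these paths would forward at a time, capping the
# leaf pair at a single uplink's 40G.
```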

Leaf-Spine Concerns

There are some concerns around utilizing the leaf-spine network architecture. The first comes from the sheer amount of cabling needed to connect each spine switch to every leaf switch, and the cable glut only worsens over time as new leaf and spine switches are added to expand capacity. Consideration should be given to where the spine switches are strategically located within a data center, especially for large deployments, to ensure cabling remains planned, organized, and manageable as the network scales out.
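To put rough numbers on the cabling concern, the sketch below counts the uplink runs in an assumed full-mesh fabric; the fabric sizes are hypothetical:

```python
# One run per leaf-spine pair, so the count grows multiplicatively.
def fabric_cables(leaves: int, spines: int) -> int:
    return leaves * spines  # one uplink from every leaf to every spine

for leaves, spines in [(8, 2), (16, 4), (32, 6), (64, 8)]:
    print(f"{leaves} leaves x {spines} spines = "
          f"{fabric_cables(leaves, spines)} cable runs")
# 64 leaves x 8 spines already means 512 structured-cabling runs to plan.
```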

The other major disadvantage comes from the use of Layer 3 routing, which eliminates the spanning of VLANs (virtual LANs) across the network. VLANs in a leaf-spine network are localized to each individual leaf switch; VLAN segments configured on one leaf switch are not accessible from the other leaves. This can create issues in scenarios such as guest virtual machine mobility within a data center.
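The sketch below illustrates the point, assuming a purely routed fabric with no overlay; the VLAN IDs and leaf names are hypothetical:

```python
# VLANs configured on each leaf's server-facing ports.
vlan_scope = {
    "leaf1": {10, 20},
    "leaf2": {20, 30},
}

def same_l2_segment(vlan, leaf_a, leaf_b):
    """Two servers share an L2 segment only on the same leaf."""
    return leaf_a == leaf_b and vlan in vlan_scope[leaf_a]

print(same_l2_segment(20, "leaf1", "leaf1"))  # True
print(same_l2_segment(20, "leaf1", "leaf2"))  # False: VLAN 20 exists on
# both leaves, but across the routed spine they are separate L2 segments,
# which is why live VM migration between leaves becomes problematic.
```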

Summary

Three-tier switch architectures have been common practice in the data center environment for several years. However, this architecture does not adequately support the low-latency, high-bandwidth requirements of large virtualized data centers. With equipment now located anywhere in the data center, east-west traffic between two servers in a three-tier architecture may have to travel through multiple switch layers, resulting in increased latency and network complexity. This has led many data centers to move to switch fabric architectures that are limited to just one or two tiers of switches. With fewer tiers of switches, server-to-server communication is improved by eliminating the need for traffic to travel through multiple switch layers.

Have a comment or points to be reviewed? Let us grow together. Feel free to comment.

