Simplifying Data Center Interconnect with 128 Technology

Data center failover is an expensive, complex, and sometimes fragile component of a network design. Solving this one problem usually involves almost every other team in the IT department, and it’s inextricably linked with the day-to-day operation of an organization.

How will a business recover from a data center outage?

How can mission critical applications move seamlessly between data centers?

How will our end-users reach an application in the event of a failover?

These are just a few of the high-level questions that, along with very technical and legal requirements, will guide the actual design of a data center failover plan. The answers will determine bandwidth, routing protocols, storage, virtual environments, security, hardware platforms, and every minute detail of the design, right down to how DNS will be propagated and what OSPF metrics are set to.

This is an expensive process both in time and resources. Depending on the business requirements, complexity can become unwieldy, and IT often incurs great technical debt.

The data center interconnect, or DCI, is the backbone for data center failover activity. Great pains are taken to research this one piece of the disaster recovery workflow. The problems are that high-bandwidth links are expensive, and the actual design of how to use this connection (or multiple connections) to maintain a standby backup data center is very complex. Even more complex is maintaining a load-balanced active/active data center environment.

Typically, a DCI utilizes dark fiber or expensive leased lines from an ISP to extend the private network of one data center to the other. From a technical perspective, DC failover usually requires additional firewalls, heavy duty routers, load balancers, switches, and all sorts of complex configuration.

Now imagine the wasted resources in an active/standby data center design. In spite of the inefficiency of this design, sometimes it’s just too cost-prohibitive or operationally complex to build and run active/active data centers.

There must be an easier way…

An SD-WAN is typically looked at only as a WAN technology, sometimes considered a replacement for a company’s core and branch routers, but SD-WAN technology can be used anywhere devices need to communicate with one another. Another use case to consider is SD-WAN between data centers.

128 Technology solves the DCI problem not by selling a warehouse full of routers, switches, and firewalls in a complex web of configuration, but by leveraging secure vector routing and a management overlay called the 128T Conductor.

Secure vector routing, or SVR, is 128T’s method of associating individual packets with a traffic session, using NAT rather than tunnels for traffic forwarding. This may seem more reminiscent of a stateful firewall, but 128T uses sessions specifically for routing, something very different from other SD-WAN vendors. Session-based routing itself isn’t brand new, but what 128T does with it can be very compelling to a network architect trying to solve the DCI problem.

For example, when a 128T router at one data center forwards server traffic to another 128T router (called a peer) in the secondary data center, the first router immediately recognizes the traffic based on traditional 5-tuple information: source and destination IP addresses, source and destination ports, and protocol. The first packet of the transmission is then assigned a session ID which is mapped to a pre-defined policy determining SLAs, payload encryption, a NAT policy, and the appropriate path vector, or physical path, for the traffic.
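To make that idea a bit more concrete, here’s a rough conceptual sketch in Python of what session-based classification on the first packet of a flow might look like. Everything here (the data structures, policy fields, and lookup logic) is my own invention for illustration, not 128T’s actual implementation.

```python
# Conceptual sketch only: session-based classification on the first packet of
# a flow. Names, fields, and policies are invented for illustration and do
# not reflect 128T's actual implementation.
import uuid
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str                    # e.g. "tcp" or "udp"

@dataclass
class Policy:
    sla_class: str                   # e.g. "replication" or "best-effort"
    encrypt_payload: bool            # encrypt the payload rather than tunnel the packet
    nat_rule: str                    # NAT applied toward the peer router
    path_vector: List[str]           # ordered links/hops toward the peer

# Pre-defined policies keyed (simplistically) by destination IP and port
policies = {
    ("10.2.0.10", 3306): Policy("replication", True, "snat-to-peer", ["dci-link-1"]),
}

sessions: dict = {}                  # FiveTuple -> (session_id, Policy)

def classify(pkt: FiveTuple) -> Optional[Tuple[str, Policy]]:
    """Return the session for a packet, creating one on the flow's first packet."""
    if pkt in sessions:                              # established session: fast path
        return sessions[pkt]
    policy = policies.get((pkt.dst_ip, pkt.dst_port))
    if policy is None:                               # no matching policy: default deny
        return None
    session = (uuid.uuid4().hex, policy)             # assign a session ID, pin the policy
    sessions[pkt] = session
    return session

first_packet = FiveTuple("10.1.0.5", "10.2.0.10", 51514, 3306, "tcp")
print(classify(first_packet))                        # new session with its policy
```

Once the session exists, subsequent packets of the flow match the session entry directly and inherit the same policy and path, which is the crux of routing on sessions rather than on individual packets.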

The use of secure vector routing and NAT within the 128T router mesh means a significant amount of resources are saved by not using a tunneling mechanism to forward traffic. This, along with session-based path vectoring, means a tremendous amount of bandwidth can be spared on DCI links, and far fewer devices need to be deployed for high-performing, secure connectivity.
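As a quick back-of-the-envelope illustration of why skipping encapsulation matters, consider the fraction of link bandwidth that actually carries payload with and without per-packet tunnel overhead. The byte counts below are assumptions for the sake of the math (real overhead depends on the encapsulation and cipher in use), not measurements of 128T or any other product:

```python
# Back-of-the-envelope sketch: per-packet overhead of a tunnel overlay versus
# rewriting headers in place (NAT). Byte counts are illustrative assumptions,
# not measurements of 128T or any specific tunnel/cipher combination.
def goodput_fraction(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of link bandwidth carrying actual payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

TUNNEL_OVERHEAD = 70   # assumed encapsulation cost per packet (outer IP + ESP, etc.)
NAT_OVERHEAD = 0       # rewriting headers already on the packet adds no bytes

for payload in (1400, 200):   # large data packets vs. small control packets
    print(f"{payload}B payload: tunnel {goodput_fraction(payload, TUNNEL_OVERHEAD):.1%}, "
          f"rewrite {goodput_fraction(payload, NAT_OVERHEAD):.1%}")
# 1400B payload: tunnel 95.2%, rewrite 100.0%
# 200B payload: tunnel 74.1%, rewrite 100.0%
```

The hit is modest for large data packets but grows quickly for the small packets that make up state synchronization and keepalive traffic, which a DCI carries plenty of.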

Because 128T’s routers by default deny traffic not associated with a session, a 128T router also acts as a basic firewall. And since SVR utilizes NAT instead of tunnels, there is no need for an additional NAT device, whether that be another router or a dedicated firewall. In other words, a 128T network eliminates many of the common middleboxes we’ve come to expect in complex network designs.

Though SD-WAN solutions work well for aggregating multiple links to connect branches to each other and to a central hub, it’s often at the cost of reduced bandwidth on the links due to tunneling overhead. Also, many SD-WAN platforms don’t provide much in the way of security, which means separate firewalls must still be deployed.

This is what makes 128 Technology unique in the SD-WAN space and a good option to consider specifically for DCI connectivity. The 128T ecosystem of routers and centralized management can provide secure, high-quality connectivity between data centers with far less complexity and without sacrificing much bandwidth on the links themselves.

Typically, a DCI is made up of a few redundant links used to exchange state information, exchange routing tables, replicate virtual servers, and move actual data traffic. Using a solution like 128 Technology means specific workloads don’t have to be pinned to specific links. Using intelligent session-based routing with devices that share state tables in a full mesh means links can be aggregated and intelligent forwarding decisions can be made as we’ve come to expect from an SD-WAN – but without the overhead.
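To illustrate the idea (and only the idea), here’s a toy sketch of choosing a DCI link per session based on policy and current link state instead of pinning workloads to links. The link names, metrics, and selection rules are invented for this example and don’t reflect how 128T actually scores paths:

```python
# Illustrative sketch only: picking a DCI link per session from policy and
# current link state, instead of pinning workloads to specific links. Link
# names, metrics, and selection rules are invented for this example.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    utilization: float               # 0.0 - 1.0

links = [
    Link("dci-fiber-1", latency_ms=2.0, utilization=0.80),
    Link("dci-fiber-2", latency_ms=3.5, utilization=0.35),
]

def pick_link(sla_class: str) -> Link:
    """Choose a link when a session is created; the session then sticks to it."""
    candidates = [l for l in links if l.utilization < 0.90]   # skip saturated links
    if sla_class == "replication":                            # latency-sensitive traffic
        return min(candidates, key=lambda l: l.latency_ms)
    return min(candidates, key=lambda l: l.utilization)       # spread bulk traffic

print(pick_link("replication").name)   # dci-fiber-1 (lowest latency)
print(pick_link("bulk").name)          # dci-fiber-2 (least utilized)
```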

Ultimately, this makes data center failover designs less complex and potentially less expensive, as it eliminates an array of middleboxes, complex configuration and overlays, and all the costs associated with planning, procuring, building, and operating a complex data center failover design. Especially in active/active designs that load-balance production applications, the 128T routing paradigm is a compelling design option.

128T certainly can’t eliminate all the complexity in designing around DCI, but it can make it much simpler.

Thanks,

Phil


128 Technology presented at a recent Tech Field Day Exclusive in Burlington, Massachusetts, where they went right into the weeds on how their technology works, what differentiates them from other SD-WAN vendors, and how some of their partners are leveraging their intelligent routing platform to solve very real business problems. 

Watch the videos here, and always feel free to contact me with questions, comments, or corrections.
