Centralised control is a core feature of the SDN concept, but when spread over an intercontinental WAN it made more sense to have a controller at each datacentre, linked into an overall traffic engineering controller for the whole WAN. This is analogous to having an intelligent police presence directing traffic at each major road junction - providing fast, low-latency decision making - with each linked by radio to a central traffic office that optimises overall traffic strategy and passes instructions back to the police on duty. In addition, running multiple OpenFlow controllers ensured there was no single point of failure.
The centralised traffic engineering (TE) service collects real-time usage and topology data from the network and calculates bandwidth demand from the applications and services. It then computes the best traffic flow path assignments and uses the OpenFlow protocol to program them into the switches. As demand fluctuates, or unanticipated events occur in the network, the TE service re-computes the assignments and reprograms the switches.
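The TE cycle described above can be sketched in miniature: gather the link topology and the demands, compute a path per demand, and emit per-switch forwarding rules of the kind a controller would push down via OpenFlow. This is purely illustrative - the function names, data shapes, and the simple shortest-path policy are assumptions for the sketch; the real TE service optimises across many paths and constraints.

```python
# Toy sketch of one centralised TE cycle. All names here are
# illustrative, not taken from any real controller's API.
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over a dict {(u, v): cost}; returns the node list src..dst."""
    adj = {}
    for (u, v), cost in links.items():
        adj.setdefault(u, []).append((v, cost))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

def compute_flow_rules(links, demands):
    """For each (src, dst) demand, turn its path into hop-by-hop
    forwarding rules keyed by switch: {switch: {dst: next_hop}}."""
    rules = {}
    for src, dst in demands:
        path = shortest_path(links, src, dst)
        for hop, nxt in zip(path, path[1:]):
            rules.setdefault(hop, {})[dst] = nxt
    return rules

# A three-site example: the direct A-C link is expensive, so traffic
# for C is steered via B, and each switch gets its next-hop rule.
links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}
rules = compute_flow_rules(links, [("A", "C")])
print(rules)  # {'A': {'C': 'B'}, 'B': {'C': 'C'}}
```

When link costs or demands change, the controller simply re-runs `compute_flow_rules` and pushes the delta - which is the essence of the re-compute-and-reprogram loop described above.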
With this OpenFlow control system in place, we now had enormous potential for radical re-thinking and experimentation with new networking concepts, but chose instead to begin with a fairly traditional Quagga BGP [Border Gateway Protocol] and IS-IS system, compatible with existing networks and familiar to the operations staff. The technology was new, so it was a safer bet to begin with a traditional feature set and then evolve from there, using the network's software-definable capability. One initial advantage was that, as an internal network, it did not need to carry the full Internet routing table.
Fig 2 summarises the approach: centralised traffic engineering (TE) layered on top of the base system and connected, with high traffic priority, to the OpenFlow controllers (OFC) at each site.
The first quarter of 2011 saw the SDN WAN up and running, and it worked! From that point the expansion has been ongoing, adding sites, increasing capacity and adding features.